1. Study looks at language often used by people with ADHD on Twitter

    November 20, 2017 by Ashley

    From the University of Pennsylvania press release:

    What can Twitter reveal about people with attention-deficit/hyperactivity disorder, or ADHD? Quite a bit about what life is like for someone with the condition, according to findings published by University of Pennsylvania researchers Sharath Chandra Guntuku and Lyle Ungar in the Journal of Attention Disorders. Twitter data might also provide clues to help facilitate more effective treatments.

    “On social media, where you can post your mental state freely, you get a lot of insight into what these people are going through, which might be rare in a clinical setting,” said Guntuku, a postdoctoral researcher working with the World Well-Being Project in the School of Arts and Sciences and the Penn Medicine Center for Digital Health. “In brief 30- or 60-minute sessions with patients, clinicians might not get all manifestations of the condition, but on social media you have the full spectrum.”

    Guntuku and Ungar, a professor of computer and information science with appointments in the School of Engineering and Applied Science, the School of Arts and Sciences, the Wharton School and Penn Medicine, turned to Twitter to try to understand what people with ADHD spend their time talking about. The researchers collected 1.3 million publicly available tweets posted by almost 1,400 users who had self-reported diagnoses of ADHD, plus an equivalent control set that matched the original group in age, gender and duration of overall social-media activity. They then ran models looking at factors like personality and posting frequency.

    “Some of the findings are in line with what’s already known in the ADHD literature,” Guntuku said. For example, social-media posters in the experimental group often talked about using marijuana for medicinal purposes. “Our coauthor, Russell Ramsay, who treats people with ADHD, said this is something he’s observed in conversations with patients,” Guntuku added.

    The researchers also found that people with ADHD tended to post messages related to lack of focus, self-regulation, intention and failure, as well as expressions of mental, physical and emotional exhaustion. They often used words like “hate,” “disappointed,” “cry” and “sad” more frequently than the control group and often posted during hours of the day when the majority of people sleep, from midnight to 6 a.m.
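    The press release does not detail the analysis pipeline, but the core comparisons it describes (relative word frequencies between the two groups, and the fraction of posts made between midnight and 6 a.m.) can be sketched in a few lines. The tweets, word list, and posting hours below are illustrative stand-ins, not the study's data:

```python
from collections import Counter
import re

def word_rates(tweets):
    """Relative frequency of each word across a list of tweets."""
    counts = Counter()
    total = 0
    for t in tweets:
        words = re.findall(r"[a-z']+", t.lower())
        counts.update(words)
        total += len(words)
    return {w: c / total for w, c in counts.items()}

def night_fraction(post_hours):
    """Fraction of posts made between midnight and 6 a.m."""
    return sum(1 for h in post_hours if 0 <= h < 6) / len(post_hours)

# Toy data standing in for the ADHD and matched-control tweet sets
adhd = ["so disappointed i cant focus again", "i hate this, want to cry"]
control = ["great game tonight", "coffee with friends was fun"]

rates_a, rates_c = word_rates(adhd), word_rates(control)
for w in ["hate", "disappointed", "cry", "sad"]:
    print(w, rates_a.get(w, 0.0), rates_c.get(w, 0.0))
print("night posting:", night_fraction([1, 3, 14, 2]))  # 3 of 4 hours -> 0.75
```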

    “People with ADHD are experiencing more mood swings and more negativity,” Ungar said. “They tend to have problems self-regulating.”

    This could partially explain why they enjoy social media’s quick feedback loop, he said. A well-timed or intriguing tweet could yield a positive response within minutes, propelling continued use of the online outlet.

    Using information gleaned from this study and others, Ungar and Guntuku said they plan to build condition-specific apps that offer insight into several conditions, including ADHD, stress, anxiety, depression and opioid addiction. They aim to factor in facets of individuals, their personality or how severe their ADHD is, for instance, as well as what triggers particular symptoms.
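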

    The applications will also include mini-interventions. A recommendation for someone who can’t sleep might be to turn off the phone an hour before going to bed. If anxiety or stress is the major factor, the app might suggest an easy exercise like taking a deep breath, then counting to 10 and back to zero.

    “If you’re prone to certain problems, certain things set you off; the idea is to help set you back on track,” Ungar said.

    Better understanding ADHD has the potential to help clinicians treat such patients more successfully, but having this information also has a downside: It can reveal aspects of a person’s personality unintentionally, simply by analyzing words posted on Twitter. The researchers also acknowledge that the 50-50 split of ADHD to non-ADHD study participants isn’t true to life; only about 8 percent of adults in the U.S. have the disorder, according to the National Institute of Mental Health. In addition, people in this study self-reported an ADHD diagnosis rather than having such a determination come from a physician interaction or medical record.

    Despite these limitations, the researchers say the work has strong potential to help clinicians understand the varying manifestations of ADHD, and it could be used as a complementary feedback tool to give ADHD sufferers personal insights.

    “The facets of better-studied conditions like depression are pretty well understood,” Ungar said. “ADHD is less well studied. Understanding the components that some people have or don’t have, the range of coping mechanisms that people use — that all leads to a better understanding of the condition.”


  2. Researchers design a gait recognition method that can overcome intra-subject variations by view differences

    November 18, 2017 by Ashley

    From the Osaka University press release:

    Biometric-based person recognition methods have been extensively explored for various applications, such as access control, surveillance, and forensics. Biometric verification involves any means by which a person can be uniquely identified through biological traits such as facial features, fingerprints, hand geometry, and gait, which is a person’s manner of walking.

    Gait is a practical trait for video-based surveillance and forensics because it can be captured at a distance on video. In fact, gait recognition has been already used in practical cases in criminal investigations. However, gait recognition is susceptible to intra-subject variations, such as view angle, clothing, walking speed, shoes, and carrying status. Such hindering factors have prompted many researchers to explore new approaches with regard to these variations.

    Research harnessing deep learning to improve gait recognition has centered on convolutional neural network (CNN) frameworks, which are widely used in computer vision, pattern recognition, and biometrics. A convolution is a mathematical operation that combines two signals to produce a third, capturing how one modifies the other.

    An advantage of a CNN-based approach is that network architectures can easily be designed for better performance by changing inputs, outputs, and loss functions. Nevertheless, a team of Osaka University-centered researchers noticed that existing CNN-based approaches to cross-view gait recognition fail to address two important aspects.

    “Current CNN-based approaches are missing the aspects on verification versus identification, and the trade-off between spatial displacement, that is, when the subject moves from one location to another,” study lead author Noriko Takemura explains.

    Considering these two aspects, the researchers designed input/output architectures for CNN-based cross-view gait recognition. They employed a Siamese network for verification, where an input is a pair of gait features for matching, and an output is genuine (the same subjects) or imposter (different subjects) probability.

    Notably, the Siamese network architectures are insensitive to spatial displacement, as the difference between a matching pair is calculated at the last layer after passing through the convolution and max pooling layers, which reduces the gait image dimensionality and allows for assumptions to be made about hidden features. They can therefore be expected to have higher performance under considerable view differences. The researchers also used CNN architectures where the difference between a matching pair is calculated at the input level to make them more sensitive to spatial displacement.
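    As a rough illustration of the verification idea described above (not the authors' actual architecture), here is a minimal NumPy sketch of a Siamese matcher: both gait features pass through the same shared stack, and their difference is taken only at the last layer before a sigmoid converts it to a genuine/imposter probability. A single random-weight ReLU layer stands in for the convolution and max-pooling layers, so the probabilities are meaningless until the weights are trained (e.g. with a contrastive or cross-entropy loss):

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared weights: both inputs pass through the SAME stack (this weight
# sharing is what makes the network "Siamese").
W = rng.normal(size=(8, 64))          # 64-dim gait feature -> 8-dim embedding
w_out = rng.normal(size=8)            # output layer weights

def embed(x):
    return np.maximum(W @ x, 0.0)     # shared embedding with ReLU activation

def genuine_probability(x1, x2):
    """Difference taken at the LAST layer, after the shared stack, which is
    what makes the matcher tolerant to spatial displacement between views."""
    d = np.abs(embed(x1) - embed(x2))
    return 1.0 / (1.0 + np.exp(-(w_out @ d)))   # sigmoid -> probability

x = rng.normal(size=64)
print(genuine_probability(x, x))      # identical features -> d = 0 -> 0.5
print(genuine_probability(x, rng.normal(size=64)))
```

    The alternative mentioned above, taking the difference at the input level, would instead feed `np.abs(x1 - x2)` into the network, making it more sensitive to where in the image the differences occur.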

    “We conducted experiments for cross-view gait recognition and confirmed that the proposed architectures outperformed the state-of-the-art benchmarks in accordance with their suitable situations of verification/identification tasks and view differences,” coauthor Yasushi Makihara says.

    As spatial displacement is caused not only by view difference but also walking speed difference, carrying status difference, clothing difference, and other factors, the researchers plan to further evaluate their proposed method for gait recognition with spatial displacement caused by other covariates.


  3. Researchers teach computer to recognize emotions in speech

    by Ashley

    From the National Research University Higher School of Economics press release:

    Experts of the Faculty of Informatics, Mathematics, and Computer Science at the Higher School of Economics have created an automatic system capable of identifying emotions in the sound of a voice. Their report was presented at a major international conference, Neuroinformatics-2017 (https://link.springer.com/chapter/10.1007/978-3-319-66604-4_18).

    For a long time, computers have successfully converted speech into text. However, the emotional component, which is important for conveying meaning, has been neglected. For example, for the same question ‘Is everything okay?’, people can answer ‘Of course it is!’ with different intonations: calm, provoking, cheerful, etc. And the reactions will be completely different.

    Neural networks are systems of interconnected processing units capable of learning, analysis, and synthesis. They surpass traditional algorithms in that they make interaction between a person and a computer more natural and responsive.

    HSE researchers Anastasia Popova, Alexander Rassadin and Alexander Ponomarenko have trained a neural network to recognize eight different emotions: neutral, calm, happy, sad, angry, scared, disgusted, and surprised. In 70% of cases the computer identified the emotion correctly, say the researchers.

    The researchers have transformed the sound into images – spectrograms – which allowed them to work with sound using the methods applied for image recognition. A deep learning convolutional neural network with VGG-16 architecture was used in the research.
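    The spectrogram step can be sketched with a plain short-time Fourier transform; the window length, hop size, and synthesized tone below are illustrative assumptions, not the parameters used in the study:

```python
import numpy as np

def spectrogram(signal, win=256, hop=128):
    """Magnitude short-time Fourier transform: slices the waveform into
    overlapping windowed frames, producing a 2-D time-frequency 'image'
    that image-recognition methods (e.g. a VGG-16 CNN) can classify."""
    window = np.hanning(win)
    frames = [signal[i:i + win] * window
              for i in range(0, len(signal) - win + 1, hop)]
    return np.abs(np.fft.rfft(np.array(frames), axis=1))

sr = 8000                                  # sample rate in Hz (assumed)
t = np.arange(sr) / sr                     # one second of audio
tone = np.sin(2 * np.pi * 220 * t)         # toy stand-in for a speech clip
S = spectrogram(tone)
print(S.shape)                             # rows = time frames, cols = frequency bins
```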

    The researchers note that the programme successfully distinguishes neutral and calm tones, while happiness and surprise are not always recognized well. Happiness is often perceived as fear and sadness, and surprise is interpreted as disgust.


  4. Study suggests ways to make email and other technology interruptions productive

    November 11, 2017 by Ashley

    From the Stephen J.R. Smith School of Business, Queen’s University press release:

    The average knowledge worker enjoys a measly five minutes of uninterrupted time and, once interrupted, half won’t even get back to what they were doing in the first place. Yet organizational expectations or social pressures make it hard to resist the urge to check incoming emails or text messages — pressing tasks be damned.

    Research suggests that such interruptions are not necessarily bad, and can even be productive.

    “I know from personal experience that some interruptions are actually good,” says Shamel Addas, an assistant professor of information systems at Smith School of Business. “It depends on the content and timing. You can get some critical information that will help, like completing your task. So that’s one of the assumptions I feel needs to be challenged and tested.”

    Addas has conducted several studies to learn more about the relationship between technology-related interruptions and performance.

    He found that interruptions that did not relate to primary activities undermined workers’ performance — they led to higher error rates, poorer memory, and lower output quality. It also took longer for workers to return to their primary work and complete their tasks. Such interruptions also had an indirect effect on performance by increasing workers’ stress levels.

    Email interruptions that related to workers’ primary activities increased stress levels as well but also boosted workers’ performance. Such interruptions were tied to mindful processing of task activities, which led to better performance both in terms of efficiency and effectiveness.

    Addas also discovered that the very features of the interrupting technology can influence the outcomes of the interruptions positively or negatively. Individuals who engaged in several email threads of conversations at the same time or those who kept getting interrupted by email but let the messages pile up in their in-box experienced higher stress levels and lower performance.

    And those who reprocessed their received messages during interruption episodes or rehearsed their message responses before sending saw some benefits. Doing this enabled them to process their tasks more mindfully, Addas says, which boosted their performance.

    What does this mean for managers? For one thing, just recognizing that there are different types of interruptions, each with its own trade-offs, can help managers mitigate the negative impacts on performance and stress.

    Addas suggests that managers develop email management programs and interventions, such as specifying a time-response window for emails based on their urgency or relevance to primary activities. They can also establish periods of quiet time for uninterrupted work. And they can encourage work groups to develop effective coordination strategies to ensure one person’s interruptions do not adversely affect colleagues.

    As for individuals, they can start handling interruptions in batches rather than in real time to reduce the costs of switching back and forth between tasks and interruptions. To reduce stress from overload, Addas suggests, people should limit parallel exchanges during interruptions and delete or file away messages that are of limited use for their core work.

    “People might well consider thinking about the messages they construct and examining carefully their previously received messages as needed to ensure that they process their tasks more mindfully, which is beneficial for performance,” Addas says.

    Addas believes there are design implications to consider as well, particularly relating to context-aware systems and email clients. Context-aware systems know what kinds of tasks people are working on and can detect high and low periods of workload. “Email clients can be programmed to screen messages for task-relevant content and distinguish between incongruent and congruent interruptions,” he says. “They can then manipulate the timing at which each type of interruption is displayed to users, such as masking incongruent interruptions until a later time.”


  5. Study suggests effectiveness of online social networks designed to help smokers quit

    November 1, 2017 by Ashley

    From the University of Iowa press release:

    Online social networks designed to help smokers kick the tobacco habit are effective, especially if users are active participants, according to a new study from the University of Iowa and the Truth Initiative, a nonprofit anti-tobacco organization.

    The study examined the tobacco use of more than 2,600 smokers who participated in BecomeAnEX.org, Truth Initiative’s online smoking cessation community designed in collaboration with the Mayo Clinic. The study found that 21 percent of those classified as active users after their first week in the community reported that they quit smoking three months later. Those who were less active in the community were less likely to quit.

    Kang Zhao, assistant professor of management sciences in the UI Tippie College of Business and the study’s co-author, says the results show that online interactions can predict offline behavior.

    “How central you become in the online social network after the first week is a good indicator of whether you will quit smoking,” says Zhao. “This is the first study to look at smokers’ behaviors in an online community over time and to report a prospective relationship between social network involvement and quitting smoking.”

    The BecomeAnEX website enables members to share information and support through blogs, forums, and messages. Although the site is focused on smoking cessation, users can post on any topic. More than 800,000 users have registered since the site launched in 2008, resulting in a large, active community of current and former tobacco users supporting each other.

    Funded by the National Cancer Institute, the study constructed a large-scale social network based on users’ posting habits. Zhao says a key finding was that increasing integration into the social network was a significant predictor of subsequent abstinence. Three months after joining the BecomeAnEX social network, users who stayed involved on the site were more likely to have quit smoking when researchers contacted them to assess their smoking status.

    After three months, 21 percent of active users — or those who actively contributed content in the community — quit smoking; 11 percent of passive users — those who only read others’ posts — quit smoking; and only 8 percent of study participants who never visited quit smoking.

    The study did not examine why greater community involvement had such a positive effect on smoking cessation. Researchers speculate it may be because of powerful social network influences.

    “Spending time with others who are actively engaged in quitting smoking in a place where being a nonsmoker is supported and encouraged gives smokers the practical advice and support they need to stay with a difficult behavior change,” says Amanda Graham, senior vice president, Innovations, of Truth Initiative and lead author. “We know that quitting tobacco can be extremely difficult. These results demonstrate what we hear from tobacco users, which is that online social connections and relationships can make a real difference.”


  6. Study suggests public commitment to weight loss goals can help with achieving them

    October 31, 2017 by Ashley

    From the American University press release:

    About those before-and-after selfies and public declarations of hitting the gym? New research co-authored by Dr. Sonya A. Grier, professor of marketing in the American University Kogod School of Business, confirms these announcements and progress updates are useful for the achievement of weight and fitness goals. “Weight Loss Through Virtual Support Communities: A Role for Identity-based Motivation in Public Commitment,” published in the Journal of Interactive Marketing, examines the role of virtual communities and public commitment in setting and achieving weight loss goals.

    The study tracked two weight loss communities, one surgical and one non-surgical, over a four-year period. The researchers found that participating in virtual support communities (VSCs) and sharing successes and setbacks there is a key part of achieving goals through a public commitment to lose weight.

    “In our investigation of VSCs, we find social identity motivates public commitment in support of goal attainment,” the researchers write.

    Grier says, “The sharing of intimate information and photos about weight loss goals in virtual space is a key factor in motivating behaviors that fulfill that new thinner identity and thus helps people reach their goals.”

    Bloggers like Audrey* shared old photos in search of a “pretty and slim” version of herself.

    “Here is my picture of 28 years ago when I was young, pretty and slim,” Audrey* wrote in a post. “Makes me wanna cry… I can’t get any younger, but I sure can get closer to that weight! Stop crying, start losing weight, girl!”

    Others, like Darlene*, shared milestones.

    “I have good news to report. My hard work of eating right and working out has paid off. I am now in ONDERLAND!!!! I weighed in this morning at 196lbs! YES, I did it. I reached my first goal to be under 200lbs and before my cruise on October 16th. I can’t believe I did it! I’m so proud of myself.”

    Ultimately, Grier says, VSCs allow for relative anonymity, accessibility, availability and flexibility in how users represent themselves on their journeys. The process of building community, even in relative anonymity, helps keep participants motivated and accountable.

    “Not everyone can get the support they need from the people they interact with in person on a daily basis. It is helpful that technology can support community building and goal achievement in virtual spaces.”

    *Names have been changed.


  7. Study suggests more teens than ever aren’t getting enough sleep

    October 27, 2017 by Ashley

    From the San Diego State University press release:

    If you’re a young person who can’t seem to get enough sleep, you’re not alone: A new study led by San Diego State University Professor of Psychology Jean Twenge finds that adolescents today are sleeping fewer hours per night than older generations. One possible reason? Young people are trading their sleep for smartphone time.

    Most sleep experts agree that adolescents need 9 hours of sleep each night to be engaged and productive students; less than 7 hours is considered to be insufficient sleep. A peek into any bleary-eyed classroom in the country will tell you that many youths are sleep-deprived, but it’s unclear whether young people today are in fact sleeping less.

    To find out, Twenge, along with psychologist Zlatan Krizan and graduate student Garrett Hisler — both at Iowa State University in Ames — examined data from two long-running, nationally representative, government-funded surveys of more than 360,000 teenagers. The Monitoring the Future survey asked U.S. students in the 8th, 10th and 12th grades how frequently they got at least 7 hours of sleep, while the Youth Risk Behavior Surveillance System survey asked 9th-12th-grade students how many hours of sleep they got on an average school night.

    Combining and analyzing data from both surveys, the researchers found that about 40% of adolescents in 2015 slept less than 7 hours a night, which is 58% more than in 1991 and 17% more than in 2009.
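    Taking those figures at face value, the implied earlier rates can be backed out from the 2015 number; a quick arithmetic check, assuming “58% more” and “17% more” are relative increases over the earlier rates:

```python
# Back out the implied earlier rates of short sleep (< 7 hours a night),
# assuming the reported percentages are relative increases.
rate_2015 = 40.0                    # percent of adolescents in 2015
rate_1991 = rate_2015 / 1.58        # implied 1991 rate
rate_2009 = rate_2015 / 1.17        # implied 2009 rate
print(round(rate_1991, 1))          # 25.3
print(round(rate_2009, 1))          # 34.2
```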

    Delving further into the data, the researchers learned that the more time young people reported spending online, the less sleep they got. Teens who spent 5 hours a day online were 50% more likely to not sleep enough than their peers who only spent an hour online each day.

    Beginning around 2009, smartphone use skyrocketed, which Twenge believes might be responsible for the 17% bump between 2009 and 2015 in the number of students sleeping 7 hours or less. Not only might teens be using their phones when they would otherwise be sleeping, the authors note, but previous research suggests the light wavelengths emitted by smartphones and tablets can interfere with the body’s natural sleep-wake rhythm. The researchers reported their findings in the journal Sleep Medicine.

    “Teens’ sleep began to shorten just as the majority started using smartphones,” said Twenge, author of iGen: Why Today’s Super-Connected Kids Are Growing Up Less Rebellious, More Tolerant, Less Happy — And Completely Unprepared for Adulthood. “It’s a very suspicious pattern.”

    Students might compensate for that lack of sleep by dozing off during daytime hours, adds Krizan.

    “Our body is going to try to meet its sleep needs, which means sleep is going to interfere or shove its nose in other spheres of our lives,” he said. “Teens may catch up with naps on the weekend or they may start falling asleep at school.”

    For many, smartphones and tablets are an indispensable part of everyday life, so the key is moderation, Twenge stresses. Limiting usage to 2 hours a day should leave enough time for proper sleep, she says. And that’s valuable advice for young and old alike.

    “Given the importance of sleep for both physical and mental health, both teens and adults should consider whether their smartphone use is interfering with their sleep,” she says. “It’s particularly important not to use screen devices right before bed, as they might interfere with falling asleep.”


  8. Study suggests phones are keeping students from concentrating during lectures

    October 20, 2017 by Ashley

    From the Stellenbosch University press release:

    Digital technologies, especially smartphones, have become such an integral part of our lives that it is difficult to picture life without them. People now spend over three hours on their phones every day.

    “While ever-smarter digital devices have made many aspects of our lives easier and more efficient, a growing body of evidence suggests that, by continuously distracting us, they are harming our ability to concentrate,” say researchers Dr Daniel le Roux and Mr Douglas Parry from the Cognition and Technology Research Group in the Department of Information Science at Stellenbosch University.

    Le Roux heads the research group, while Parry is a doctoral candidate. Their work focuses on the impact of digital media, particularly phones, on students’ ability to concentrate in the classroom.

    According to them, today’s students are digital natives – individuals born after 1980 – who have grown up surrounded by digital media and quickly adapted to this environment to such an extent that “they are constantly media-multitasking, that is, concurrently engaging with, and rapidly switching between, multiple media to stay connected, always updated and always stimulated.”

    The researchers say it shouldn’t be surprising that university lecturers are encouraged to develop blended learning initiatives and bring tech – videos, podcasts, Facebook pages, etc. – into the classroom more and more to offer students the enhanced experiences enabled by digital media.

    They warn, however, that an important effect of these initiatives has been to establish media use during university lectures as the norm.

    “Studies by ourselves and researchers across the world show that students constantly use their phones when they are in class.

    “But here’s the kicker: if you think they are following the lecture slides or engaging in debates about the topic you are mistaken. In fact, this is hardly ever the case. When students use their phones during lectures they do it to communicate with friends, engage in social networks, watch YouTube videos or just browse around the web to follow their interests.”

    The researchers say there are two primary reasons why this form of behaviour is problematic from a cognitive control and learning perspective.

    “The first is that when we engage in multitasking our performance on the primary task suffers. Making sense of lecture content is very difficult when you switch attention to your phone every five minutes. A strong body of evidence supports this, showing that media use during lectures is associated with lower academic performance.”

    “The second reason is that it harms students’ ability to concentrate on any particular thing for an extended period of time. They become accustomed to switching to alternative streams of stimuli at increasingly short intervals. The moment the lecture fails to engage or becomes difficult to follow, the phones come out.”

    The researchers say awareness of this trend has prompted some lecturers, even at leading tech-oriented universities like MIT in the United States, to declare their lectures device-free in an attempt to cultivate engagement, attentiveness and, ultimately, critical thinking skills among their students.

    “No one can deny that mobile computing devices make our lives easier and more fun in a myriad of ways. But, in the face of all the connectedness and entertainment they offer, we should be mindful of the costs.”

    The researchers encourage educational policy makers and lecturers, in particular, to consider the implications of their decisions with a much deeper awareness of the dynamics between technology use and the cognitive functions which enable us to learn.


  9. Study suggests game design elements can help increase physical activity among adults

    October 17, 2017 by Ashley

    From the JAMA Network Journals press release:

    Physical activity increased among families in a randomized clinical trial as part of a game-based intervention where they could earn points and progress through levels based on step goal achievements, according to a new article published by JAMA Internal Medicine.

    More than half of the adults in the United States don’t get enough physical activity. Gamification, which is the use of game design elements such as points and levels, is increasingly used in digital health interventions. However, evidence of its effectiveness is limited.

    Mitesh S. Patel, M.B.A., M.S., of the Perelman School of Medicine at the University of Pennsylvania, Philadelphia, and coauthors conducted a clinical trial among adults enrolled in the Framingham Heart Study, a long-standing cohort of families. The clinical trial included a 12-week intervention and 12 more weeks of follow-up among 200 adults from 94 families.

    All study participants tracked their daily step counts with either a wearable device or a smartphone to establish a baseline and then selected a step goal increase. They were given performance feedback by text or email for 24 weeks. About half of the adults participated in the gamification arm of the study and were entered into a game with their family where they could earn points and progress through levels as a way to enhance social incentives through collaboration, accountability and peer support, as well as physical activity.

    More than half of the participants were female and the average age was about 55. At the start of the trial, the average number of daily steps was 7,662 in the control group of the study and 7,244 in the group with the game-based intervention.

    During the 12-week intervention period, participants in the gamification arm achieved step goals on a greater proportion of participant-days (difference of 0.53 vs. 0.32) and they had a greater increase in average daily steps compared with baseline (difference of 1,661 vs. 636) than the control group, according to the results.

    While results show physical activity declined during the 12-week follow-up period in the gamification group, it was still better than that in the control group for the proportion of participant-days achieving step goals (difference of 0.44 vs. 0.33) and the average daily steps compared with baseline (difference of 1,385 vs. 798).
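    The release does not define how “proportion of participant-days” was computed, but a plausible reading is the fraction of all person-day observations on which the daily step goal was met; a minimal sketch with made-up numbers:

```python
def proportion_days_met(step_counts, goal):
    """Fraction of all participant-days on which the daily step goal was met.
    step_counts maps participant -> list of daily step totals."""
    met = total = 0
    for days in step_counts.values():
        met += sum(1 for steps in days if steps >= goal)
        total += len(days)
    return met / total

# Toy data: two participants against a shared 8,000-step goal (illustrative)
steps = {"p1": [8500, 7200, 9100], "p2": [6000, 8800, 7900]}
print(proportion_days_met(steps, 8000))   # 3 of 6 participant-days -> 0.5
```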

    The study notes some limitations, including ones that may limit generalizability: all participants were members of the Framingham Heart Study, had European ancestry, and needed a smartphone or a computer. Researchers also did not test the intervention’s effects in nonfamily networks.

    “Our findings suggest that gamification may offer a promising approach to change health behaviors if designed using insights from behavioral economics to enhance social incentives,” the authors conclude.


  10. Study suggests cellphone ownership may increase incidence of cyberbullying in grade school

    October 2, 2017 by Ashley

    From the American Academy of Pediatrics press release:

    Most research on cyberbullying has focused on adolescents. But a new study that examined cell phone ownership among children in third to fifth grades finds they may be particularly vulnerable to cyberbullying.

    The study abstract, “Cell Phone Ownership and Cyberbullying in 8-11 Year Olds: New Research,” will be presented Monday, Sept. 18 at the American Academy of Pediatrics National Conference & Exhibition in Chicago.

    Researchers collected survey data on 4,584 students in grades 3, 4 and 5 between 2014 and 2016. Overall, 9.5 percent of children reported being a victim of cyberbullying. Children who owned cell phones were significantly more likely to report being a victim of cyberbullying, especially in grades 3 and 4.

    “Parents often cite the benefits of giving their child a cell phone, but our research suggests that giving young children these devices may have unforeseen risks as well,” said Elizabeth K. Englander, Ph.D., a professor of psychology at Bridgewater State University in Bridgewater, Mass.

    Across all three grades, 49.6 percent of students reported owning a cell phone. The older the student, the more likely they were to report cell phone ownership: 59.8 percent of fifth graders, 50.6 percent of fourth graders, and 39.5 percent of third graders reported owning their own cell phone. Cell phone owners in grades three and four were more likely to report being a victim of cyberbullying. Across all three grades, more cell phone owners admitted they had been a cyberbully themselves.

    According to the researchers, the increased risk of cyberbullying related to phone ownership could be tied to increased opportunity and vulnerability. Continuous access to social media and texting increases online interactions, provides more opportunities to engage both positively and negatively with peers, and increases the chance of an impulsive response to peers’ postings and messages.

    Englander suggests that this research is a reminder for parents to consider the risks as well as the benefits when deciding whether to provide their elementary school-aged child with a cell phone.

    “At the very least, parents can engage in discussions and education with their child about the responsibilities inherent in owning a mobile device, and the general rules for communicating in the social sphere,” Englander said.