1. Study suggests smartphone addiction creates imbalance in brain

    December 6, 2017 by Ashley

    From the Radiological Society of North America press release:

    Researchers have found an imbalance in the brain chemistry of young people addicted to smartphones and the internet, according to a study presented today at the annual meeting of the Radiological Society of North America (RSNA).

    According to a recent Pew Research Center study, 46 percent of Americans say they could not live without their smartphones. While this sentiment is clearly hyperbole, more and more people are becoming increasingly dependent on smartphones and other portable electronic devices for news, information, games, and even the occasional phone call.

    Along with a growing concern that young people, in particular, may be spending too much time staring into their phones instead of interacting with others, come questions as to the immediate effects on the brain and the possible long-term consequences of such habits.

    Hyung Suk Seo, M.D., professor of neuroradiology at Korea University in Seoul, South Korea, and colleagues used magnetic resonance spectroscopy (MRS) to gain unique insight into the brains of smartphone- and internet-addicted teenagers. MRS is a type of MRI that measures the brain’s chemical composition.

    The study involved 19 young people (mean age 15.5, 9 males) diagnosed with internet or smartphone addiction and 19 gender- and age-matched healthy controls. Twelve of the addicted youth received nine weeks of cognitive behavioral therapy, modified from a cognitive therapy program for gaming addiction, as part of the study.

    Researchers used standardized internet and smartphone addiction tests to measure the severity of internet addiction. Questions focused on the extent to which internet and smartphone use affects daily routines, social life, productivity, sleeping patterns and feelings.

    “The higher the score, the more severe the addiction,” Dr. Seo said.

    Dr. Seo reported that the addicted teenagers had significantly higher scores in depression, anxiety, insomnia severity and impulsivity.

    The researchers performed MRS exams on the addicted youth before and after the behavioral therapy, and a single MRS exam on the controls. The exams measured levels of gamma aminobutyric acid, or GABA, a neurotransmitter that inhibits or slows down brain signals, and glutamate-glutamine (Glx), a neurotransmitter that causes neurons to become more electrically excited. Previous studies have found GABA to be involved in vision and motor control and in the regulation of various brain functions, including anxiety.

    The results of the MRS revealed that, compared to the healthy controls, the ratio of GABA to Glx was significantly increased in the anterior cingulate cortex of smartphone- and internet-addicted youth prior to therapy.

    Dr. Seo said the ratios of GABA to creatine and GABA to glutamate were significantly correlated to clinical scales of internet and smartphone addictions, depression and anxiety.
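
    To make the two analyses above concrete, here is a minimal Python sketch (not the authors' pipeline) of a group comparison of GABA-to-Glx ratios followed by a correlation with a clinical addiction score. Every number in it is invented for illustration.

    ```python
    # Hypothetical illustration of the two analyses described above:
    # (1) a group comparison of GABA/Glx ratios, (2) a correlation with
    # clinical scores. All values are made up; they are NOT the study's data.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    gaba_glx_addicted = rng.normal(0.28, 0.04, 19)   # addicted group, pre-therapy (n=19)
    gaba_glx_controls = rng.normal(0.22, 0.04, 19)   # matched controls (n=19)

    # (1) Two-sample t-test: is the ratio higher in the addicted group?
    t, p = stats.ttest_ind(gaba_glx_addicted, gaba_glx_controls)
    print(f"group difference: t={t:.2f}, p={p:.3f}")

    # (2) Correlation between the ratio and a clinical addiction score.
    addiction_scores = 50 + 80 * gaba_glx_addicted + rng.normal(0, 3, 19)
    r, p = stats.pearsonr(gaba_glx_addicted, addiction_scores)
    print(f"ratio vs. addiction score: r={r:.2f}, p={p:.3f}")
    ```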

    Having too much GABA can result in a number of side effects, including drowsiness and anxiety.

    More study is needed to understand the clinical implications of the findings, but Dr. Seo believes that increased GABA in the anterior cingulate gyrus in internet and smartphone addiction may be related to the functional loss of integration and regulation of processing in the cognitive and emotional neural network.

    The good news is that GABA to Glx ratios in the addicted youth significantly decreased or normalized after cognitive behavioral therapy.

    “The increased GABA levels and disrupted balance between GABA and glutamate in the anterior cingulate cortex may contribute to our understanding of the pathophysiology of, and treatment for, addictions,” Dr. Seo said.


  2. Screen time before bed linked with less sleep, higher BMIs in kids

    by Ashley

    From the Penn State press release:

    It may be tempting to let your kids stay up late playing games on their smartphones, but using digital devices before bed may contribute to sleep and nutrition problems in children, according to Penn State College of Medicine researchers.

    After surveying parents about their kids’ technology and sleep habits, researchers found that using technology before bed was associated with less sleep, poorer sleep quality, more fatigue in the morning and — in the children who watched TV or used their cell phones before bed — higher body mass indexes (BMIs).

    Caitlyn Fuller, medical student, said the results — published in the journal Global Pediatric Health — may suggest a vicious cycle of technology use, poor sleep and rising BMIs.

    “We saw technology before bed being associated with less sleep and higher BMIs,” Fuller said. “We also saw this technology use being associated with more fatigue in the morning, which, circling back, is another risk factor for higher BMIs. So we’re seeing a loop pattern forming.”

    Previous research has found associations between more technology use and less sleep, more inattention, and higher BMIs in adolescents. But even though research shows that 40 percent of children have cell phones by fifth grade, the researchers said not as much was known about the effects of technology on a younger population.

    Fuller said that because sleep is so critical to a child’s development, she was interested in learning more about the connection between screen time right before bed and how well those children slept, as well as how it affected other aspects of their health.

    The researchers asked the parents of 234 children between the ages of 8 and 17 years about their kids’ sleep and technology habits. The parents provided information about their children’s technology habits, sleep patterns, nutrition and activity. The researchers also asked the parents to further specify whether their children were using cell phones, computers, video games or television during their technology time.

    After analyzing the data, the researchers found several adverse effects associated with using different technologies right before bed.

    “We found an association between higher BMIs and an increase in technology use, and also that children who reported more technology use at bedtime were associated with less sleep at night,” Fuller said. “These children were also more likely to be tired in the morning, which is also a risk factor for higher BMIs.”

    Children who reported watching TV or playing video games before bed got an average of 30 minutes less sleep than those who did not, while kids who used their phone or a computer before bed averaged an hour less of sleep than those who did not.
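
    As a rough illustration of the kind of group comparison behind these averages, the sketch below computes the difference in mean sleep for children who did and did not watch TV before bed. The handful of records is invented, not the study's survey data.

    ```python
    # Toy version of the comparison reported above: mean sleep duration for
    # children who did vs. did not use a given device before bed.
    # These records are invented; the real study surveyed parents of 234 children.
    import statistics

    records = [
        {"tv_before_bed": True,  "sleep_hours": 8.6},
        {"tv_before_bed": False, "sleep_hours": 9.1},
        {"tv_before_bed": True,  "sleep_hours": 8.4},
        {"tv_before_bed": False, "sleep_hours": 9.0},
    ]

    users = [r["sleep_hours"] for r in records if r["tv_before_bed"]]
    non_users = [r["sleep_hours"] for r in records if not r["tv_before_bed"]]

    diff_minutes = (statistics.mean(non_users) - statistics.mean(users)) * 60
    print(f"TV users slept about {diff_minutes:.0f} minutes less on average")
    ```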

    There was also an association between using all four types of technology before bed and increased cell phone use at night, such as waking up to text someone, with watching TV resulting in the highest odds.

    Fuller said the results support new recommendations from the American Academy of Pediatrics (AAP) about screen time for children. The AAP recommends that parents create boundaries around technology use, such as requiring their kids to put away their devices during meal times and keeping phones out of bedrooms at night.

    Dr. Marsha Novick, associate professor of pediatrics and family and community medicine, said that while more research is needed to determine whether using multiple devices at bedtime results in worse sleep than using just one device, the study can help pediatricians talk to parents about the use of technology.

    “Although there are many benefits to using technology, pediatricians may want to counsel parents about limiting technology for their kids, particularly at bedtime, to promote healthy childhood development and mental health,” Novick said.


  3. Study suggests microblogging may help reduce negative emotions for people with social anxiety

    December 5, 2017 by Ashley

    From the Society for Consumer Psychology press release:

    Have you ever wanted to tell someone about a tough day at work or scary medical news, but felt nervous about calling a friend to share what’s going on?

    Findings from a new study suggest that people who feel apprehensive about one-on-one interactions are taking advantage of a new form of communication that may help regulate emotions during times of need: online social networks. The study is available online in the Journal of Consumer Psychology.

    “When people feel badly, they have a need to reach out to others because this can help reduce negative emotions and restore a sense of well-being,” says Eva Buechel, a professor in the business school at the University of South Carolina. “But talking to someone face-to-face or on the phone might feel daunting because people may worry that they are bothering them. Sharing a status update on Facebook or tweet on Twitter allows people to reach out to a large audience in a more undirected manner.”

    Sharing short messages to an audience on a social network, called microblogging, allows people to reach out without imposing unwanted communication on someone who might feel obligated to respond. Responses on online social networks are more voluntary. To test whether people are more likely to microblog when they feel socially apprehensive, Buechel asked participants in one group to write about a time when they had no one to talk to at a party, while the control group wrote about office products.

    Then she asked the participants who had an online social network account to log in and spend two minutes on their preferred social network. When the time ended, she asked people if they had microblogged. The results showed that those who had been led to feel socially apprehensive were more likely to microblog.
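
    The comparison this first experiment turns on is a difference in proportions. The sketch below shows one common way to test it, a chi-square test on a 2x2 table; the counts are invented for illustration, not Buechel's data.

    ```python
    # Did a larger share of the socially apprehensive group microblog during
    # the two-minute window than the control group? Counts are hypothetical.
    from scipy.stats import chi2_contingency

    #               microblogged  did not
    contingency = [[34,           66],    # apprehensive condition
                   [21,           79]]    # control condition (office products)

    chi2, p, dof, expected = chi2_contingency(contingency)
    print(f"chi-square = {chi2:.2f}, p = {p:.3f}")
    ```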

    To explore who is more likely to microblog, Buechel conducted another experiment in which one group of participants watched a clip from the movie “Silence of the Lambs,” while the control group watched clips of pictures from space. Then they answered questions about how likely they were to express themselves in three different forms of communication: microblogging, in person or direct message (a private online message to an individual). Finally, she asked people to answer a series of questions that measured their level of social anxiety in a variety of situations.

    Buechel discovered that people who were higher on the social apprehension scale were more likely to microblog after they had experienced negative emotions (as a result of watching the “Silence of The Lambs” clip). People who were low on the social apprehension scale, however, were more interested in sharing face-to-face or via direct message after watching the scary clip.

    “There is a lot of research showing that sharing online is less ideal than having communication in person, but these social networks could be an important communication channel for certain individuals who would otherwise stay isolated,” she says.

    She acknowledges that there is a danger for those who start to rely on social media as their only form of communication, but when used wisely, microblogging can be a valuable means of buffering negative emotions through social interaction.


  4. Study suggests major life events shared on social media revive dormant connections

    by Ashley

    From the University of Notre Dame press release:

    Online social networking has revolutionized the way people communicate and interact with one another, despite idiosyncrasies we all love to hate — think top-10 lists of the most annoying people and habits on social media.

    However, there are specific advantages to using social media, beyond the simple joys — and occasional annoyances — of reconnecting and gossiping with old friends about babies, birthdays and baptisms.

    New research from the University of Notre Dame’s Mendoza College of Business examines the impact of major life events, such as getting married or graduating from college, on social network evolution, which, the study shows, has important implications for business practices, such as in marketing.

    “Who Cares About Your Big Day? Impact of Life Events on Dynamics of Social Networks,” forthcoming in Decision Sciences by Hong Guo, associate professor of business analytics, and Sarv Devaraj, professor of business, along with Arati Srinivasan of Providence College, shows that major life events not only get more social media attention overall, but also bring long-dormant connections back into social interaction.

    The researchers specifically focus on two key characteristics of individuals’ social networks: indegree of ties and relational embeddedness. Indegree is the number of ties directed to an individual. Those with high indegree centrality are assumed to be the most popular, prestigious and powerful people in a network due to the many connections that they have with others.

    “We find that the indegree of ties increases significantly following a major life event, and that this impact is stronger for more active users in the network,” Guo says. “Interestingly, we find that the broadcast of major life events helps to revive dormant ties as reflected by a decrease in embeddedness following a life event.”

    Relational embeddedness is the extent to which a user communicates with only a subset of partners. Social networking sites allow users to manage a larger network of weak ties and at the same time provide a mechanism for the very rapid dissemination of information pertaining to important life events such as engagements, weddings or births.

    “We show that major events provide an opportunity for users to revive communication with their dormant ties while simultaneously eliciting responses or communication from a user’s passive or weak ties,” Guo says. “Increased communication with weak ties thereby reduces the extent of embeddedness. We also find that one-time life events, such as weddings, have a greater impact than recurring life events like birthdays on the evolution of individuals’ social networks.”
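
    For readers who want to see the two measures in code, here is a toy sketch built on a small directed "who messaged whom" graph. The top-partner-share proxy used for relational embeddedness is an illustrative simplification, not the paper's exact measure, and the names and messages are made up.

    ```python
    # Toy network measures: indegree of ties and a simple embeddedness proxy.
    import networkx as nx

    G = nx.MultiDiGraph()
    messages = [("bob", "alice"), ("carol", "alice"), ("dave", "alice"),
                ("bob", "alice"), ("alice", "bob"), ("carol", "bob")]
    G.add_edges_from(messages)

    # Indegree: how many messages are directed to each user.
    print(dict(G.in_degree()))   # e.g. {'bob': 2, 'alice': 4, 'carol': 0, 'dave': 0}

    # Embeddedness proxy for one user: how concentrated their incoming
    # communication is on a single partner (higher = more embedded).
    incoming = [sender for sender, receiver in messages if receiver == "alice"]
    top_partner_share = max(incoming.count(s) for s in set(incoming)) / len(incoming)
    print(f"alice's top-partner share: {top_partner_share:.2f}")
    ```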

    So why does this matter outside of our social media circles?

    “Knowing this, advertisers may better target their ads to major life events. For example, a travel agent marketing a honeymoon package can target a user who has shared that they just got married,” Guo says. “From the social networking sites’ perspective, various design features may be set up to enable and entice users to better share their life events, like how Facebook helps friends promote birthdays.”


  5. Videogame study suggests link between intelligence and skill at game

    November 27, 2017 by Ashley

    From the University of York press release:

    Researchers at the University of York have discovered a link between young people’s ability to perform well at two popular video games and high levels of intelligence.

    Studies carried out at the Digital Creativity Labs (DC Labs) at York found that some action strategy video games can act like IQ tests. The researchers’ findings are published today in the journal PLOS ONE.

    The York researchers stress the studies have no bearing on questions such as whether playing computer games makes young people smarter or otherwise. They simply establish a correlation between skill at certain online games of strategy and intelligence.

    The researchers focused on ‘Multiplayer Online Battle Arenas’ (MOBAs) — action strategy games that typically involve two opposing teams of five individuals — as well as multiplayer ‘First Person Shooter’ games. These types of games are hugely popular with hundreds of millions of players worldwide.

    The team from York’s Departments of Psychology and Computer Science carried out two studies. The first examined a group of subjects who were highly experienced in the MOBA League of Legends — one of the most popular strategic video games in the world with millions of players each day.

    In this study, the researchers observed a correlation between performance in the strategic game League of Legends and performance in standard paper-and-pencil intelligence tests.
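
    The shape of this first analysis is simply a correlation between an in-game performance measure and a test score. The sketch below shows that computation on invented numbers; the rank values are hypothetical and are not data from the study.

    ```python
    # Correlating a (hypothetical) in-game performance measure with an IQ score.
    from scipy.stats import pearsonr

    game_rank = [1200, 1450, 1630, 1810, 2050, 2240, 2400]   # invented ranks
    iq_score  = [  96,  102,  105,  110,  112,  118,  121]   # invented scores

    r, p = pearsonr(game_rank, iq_score)
    print(f"Pearson r = {r:.2f} (p = {p:.3f})")
    ```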

    The second study analysed big datasets from four games: two MOBAs (League of Legends and Defence of the Ancients 2 (DOTA 2)) and two ‘First Person Shooters’ (Destiny and Battlefield 3). First Person Shooters (FPSs) are games involving shooting enemies and other targets, with the player viewing the action as though through the eyes of the character they are controlling.

    In this second study, they found that for large groups consisting of thousands of players, performance in MOBAs and IQ behave in similar ways as players get older. But this effect was not found for First Person Shooters, where performance declined after the teens.

    The researchers say the correlation between ability at action strategy video games such as League of Legends and Defence of the Ancients 2 (DOTA 2) and a high IQ is similar to the correlation seen in other more traditional strategy games such as chess.

    Corresponding author Professor Alex Wade of the University of York’s Department of Psychology and Digital Creativity Labs said: “Games such as League of Legends and DOTA 2 are complex, socially-interactive and intellectually demanding. Our research would suggest that your performance in these games can be a measure of intelligence.

    “Research in the past has pointed to the fact that people who are good at strategy games such as chess tend to score highly at IQ tests. Our research has extended this to games that millions of people across the planet play every day.”

    The discovery of this correlation between skill and intelligence opens up a huge new data source. For example, as ‘proxy’ tests of IQ, games could be useful at a global population level in fields such as ‘cognitive epidemiology’ — research that examines the associations between intelligence and health across time — and as a way of monitoring cognitive health across populations.

    Athanasios Kokkinakis, a PhD student with the EPSRC Centre for Intelligent Games and Game Intelligence (IGGI) research programme at York, is the lead author on the study.

    He said: “Unlike First Person Shooter (FPS) games where speed and target accuracy are a priority, Multiplayer Online Battle Arenas rely more on memory and the ability to make strategic decisions taking into account multiple factors.

    “It is perhaps for these reasons that we found a strong correlation between skill and intelligence in MOBAs.”

    Co-author Professor Peter Cowling, Director of DC Labs and the IGGI programme at York, said: “This cutting-edge research has the potential for substantial impact on the future of the games and creative industries — and on games as a tool for research in health and psychology.

    “The IGGI programme has 48 excellent PhD students working with industry and across disciplines — there is plenty more to come!”


  6. Study suggests punctuation in text messages helps replace cues found in face-to-face conversations

    November 25, 2017 by Ashley

    From the Binghamton University press release:

    Emoticons, irregular spellings and exclamation points in text messages aren’t sloppy or a sign that written language is going down the tubes — these “textisms” help convey meaning and intent in the absence of spoken conversation, according to newly published research from Binghamton University, State University of New York.

    “In contrast with face-to-face conversation, texters can’t rely on extra-linguistic cues such as tone of voice and pauses, or non-linguistic cues such as facial expressions and hand gestures,” said Binghamton University Professor of Psychology Celia Klin. “In a spoken conversation, the cues aren’t simply add-ons to our words; they convey critical information. A facial expression or a rise in the pitch of our voices can entirely change the meaning of our words.”

    “It’s been suggested that one way that texters add meaning to their words is by using ‘textisms’ — things like emoticons, irregular spellings (sooooo) and irregular use of punctuation (!!!).”

    A 2016 study led by Klin found that text messages that end with a period are seen as less sincere than text messages that do not end with a period. Klin pursued this subject further, conducting experiments to see if people reading texts understand textisms, asking how people’s understanding of a single-word text (e.g., yeah, nope, maybe) as a response to an invitation is influenced by the inclusion, or absence, of a period.

    “In formal writing, such as what you’d find in a novel or an essay, the period is almost always used grammatically to indicate that a sentence is complete. With texts, we found that the period can also be used rhetorically to add meaning,” said Klin. “Specifically, when one texter asked a question (e.g., I got a new dog. Wanna come over?), and it was answered with a single word (e.g., yeah), readers understood the response somewhat differently depending if it ended with a period (yeah.) or did not end with a period (yeah). This was true if the response was positive (yeah, yup), negative (nope, nah) or more ambiguous (maybe, alright). We concluded that although periods no doubt can serve a grammatical function in texts just as they can with more formal writing — for example, when a period is at the end of a sentence — periods can also serve as textisms, changing the meaning of the text.”
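
    One way to picture the comparison Klin describes, assuming invented ratings on a 7-point scale: collect how readers interpret the same one-word reply with and without a period, then compare the two sets of ratings.

    ```python
    # Hypothetical ratings of how positive/sincere the reply "yeah" feels,
    # with and without a trailing period. The numbers are invented.
    from scipy import stats

    ratings_no_period   = [6, 5, 6, 7, 5, 6, 6, 7]   # "yeah"
    ratings_with_period = [4, 5, 4, 3, 5, 4, 4, 5]   # "yeah."

    t, p = stats.ttest_ind(ratings_no_period, ratings_with_period)
    print(f"t = {t:.2f}, p = {p:.4f}")
    ```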

    Klin said that this research is motivated by an interest in taking advantage of a unique moment in time when scientists can observe language evolving in real time.

    “What we are seeing with electronic communication is that, as with any unmet language need, new language constructions are emerging to fill the gap between what people want to express and what they are able to express with the tools they have available to them,” said Klin. “The findings indicate that our understanding of written language varies across contexts. We read text messages in a slightly different way than we read a novel or an essay. Further, all the elements of our texts — the punctuation we choose, the way that words are spelled, a smiley face — can change the meaning. The hope, of course, is that the meaning that is understood is the one we intended. Certainly, it’s not uncommon for those of us in the lab to take an extra second or two before we send texts. We wonder: How might this be interpreted? ‘Hmmm, period or no period? That sounds a little harsh; maybe I should soften it with a “lol” or a winky-face-tongue-out emoji.'”

    With trillions of text messages sent each year, we can expect the evolution of textisms, and of the language of texting more generally, to continue at a rapid rate, wrote the researchers. Texters are likely to continue to rely on current textisms, as well as to create new ones, to take the place of the extra-linguistic and nonverbal cues available in spoken conversations. The rate of change for “talk-writing” is likely to continue to outpace the changes in other forms of English.

    “The results of the current experiments reinforce the claim that the divergence from formal written English that is found in digital communication is neither arbitrary nor sloppy,” said Klin. “It wasn’t too long ago that people began using email, instant messaging and text messaging on a regular basis. Because these forms of communication provide limited ways to communicate nuanced meaning, especially compared to face-to-face conversations, people have found other tools.”

    Klin believes that this subject could be studied further.

    “An important extension would be to have a situation that more closely mimics actual texting, but in the lab where we can observe how different cues, such as punctuation, abbreviations and emojis, contribute to texters’ understanding,” she said. “We might also examine some of the factors that should influence texters’ understanding, such as the social relationship between the two texters or the topic of the text exchange, as their level of formality should influence the role of things like punctuation.”


  7. Study looks at language often used by people with ADHD on Twitter

    November 20, 2017 by Ashley

    From the University of Pennsylvania press release:

    What can Twitter reveal about people with attention-deficit/hyperactivity disorder, or ADHD? Quite a bit about what life is like for someone with the condition, according to findings published by University of Pennsylvania researchers Sharath Chandra Guntuku and Lyle Ungar in the Journal of Attention Disorders. Twitter data might also provide clues to help facilitate more effective treatments.

    “On social media, where you can post your mental state freely, you get a lot of insight into what these people are going through, which might be rare in a clinical setting,” said Guntuku, a postdoctoral researcher working with the World Well-Being Project in the School of Arts and Sciences and the Penn Medicine Center for Digital Health. “In brief 30- or 60-minute sessions with patients, clinicians might not get all manifestations of the condition, but on social media you have the full spectrum.”

    Guntuku and Ungar, a professor of computer and information science with appointments in the School of Engineering and Applied Science, the School of Arts and Sciences, the Wharton School and Penn Medicine, turned to Twitter to try to understand what people with ADHD spend their time talking about. The researchers collected 1.3 million publicly available tweets posted by almost 1,400 users who had self-reported diagnoses of ADHD, plus an equivalent control set that matched the original group in age, gender and duration of overall social-media activity. They then ran models looking at factors like personality and posting frequency.

    “Some of the findings are in line with what’s already known in the ADHD literature,” Guntuku said. For example, social-media posters in the experimental group often talked about using marijuana for medicinal purposes. “Our coauthor, Russell Ramsay, who treats people with ADHD, said this is something he’s observed in conversations with patients,” Guntuku added.

    The researchers also found that people with ADHD tended to post messages related to lack of focus, self-regulation, intention and failure, as well as expressions of mental, physical and emotional exhaustion. They used words like “hate,” “disappointed,” “cry” and “sad” more frequently than the control group and often posted during hours of the day when most people sleep, from midnight to 6 a.m.
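
    A rough sketch of these two descriptive comparisons, using a couple of invented tweets per group: the share of tweets containing a target word and the share posted between midnight and 6 a.m.

    ```python
    # Invented tweets; only the shape of the comparison mirrors the study.
    from datetime import datetime

    target_words = {"hate", "disappointed", "cry", "sad"}

    def word_rate(tweets):
        hits = sum(any(w in t["text"].lower().split() for w in target_words)
                   for t in tweets)
        return hits / len(tweets)

    def night_share(tweets):
        return sum(0 <= t["time"].hour < 6 for t in tweets) / len(tweets)

    adhd_tweets = [
        {"text": "so tired and sad today", "time": datetime(2017, 11, 20, 2, 14)},
        {"text": "cannot focus at all",    "time": datetime(2017, 11, 20, 3, 40)},
    ]
    control_tweets = [
        {"text": "great game last night",  "time": datetime(2017, 11, 20, 19, 5)},
        {"text": "coffee then work",       "time": datetime(2017, 11, 20, 8, 30)},
    ]

    for name, tweets in [("ADHD", adhd_tweets), ("control", control_tweets)]:
        print(name, f"word rate = {word_rate(tweets):.2f}",
              f"night share = {night_share(tweets):.2f}")
    ```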

    “People with ADHD are experiencing more mood swings and more negativity,” Ungar said. “They tend to have problems self-regulating.”

    This could partially explain why they enjoy social media’s quick feedback loop, he said. A well-timed or intriguing tweet could yield a positive response within minutes, propelling continued use of the online outlet.

    Using information gleaned from this study and others, Ungar and Guntuku said they plan to build condition-specific apps that offer insight into several conditions, including ADHD, stress, anxiety, depression and opioid addiction. They aim to factor in facets of individuals, their personality or how severe their ADHD is, for instance, as well as what triggers particular symptoms.

    The applications will also include mini-interventions. A recommendation for someone who can’t sleep might be to turn off the phone an hour before going to bed. If anxiety or stress is the major factor, the app might suggest an easy exercise like taking a deep breath, then counting to 10 and back to zero.

    “If you’re prone to certain problems, certain things set you off; the idea is to help set you back on track,” Ungar said.

    Better understanding ADHD has the potential to help clinicians treat such patients more successfully, but having this information also has a downside: It can reveal aspects of a person’s personality unintentionally, simply by analyzing words posted on Twitter. The researchers also acknowledge that the 50-50 split of ADHD to non-ADHD study participants isn’t true to life; only about 8 percent of adults in the U.S. have the disorder, according to the National Institute of Mental Health. In addition, people in this study self-reported an ADHD diagnosis rather than having such a determination come from a physician interaction or medical record.

    Despite these limitations, the researchers say the work has strong potential to help clinicians understand the varying manifestations of ADHD, and it could be used as a complementary feedback tool to give ADHD sufferers personal insights.

    “The facets of better-studied conditions like depression are pretty well understood,” Ungar said. “ADHD is less well studied. Understanding the components that some people have or don’t have, the range of coping mechanisms that people use — that all leads to a better understanding of the condition.”


  8. Researchers design a gait recognition method that can overcome intra-subject variations by view differences

    November 18, 2017 by Ashley

    From the Osaka University press release:

    Biometric-based person recognition methods have been extensively explored for various applications, such as access control, surveillance, and forensics. Biometric verification involves any means by which a person can be uniquely identified through biological traits such as facial features, fingerprints, hand geometry, and gait, which is a person’s manner of walking.

    Gait is a practical trait for video-based surveillance and forensics because it can be captured at a distance on video. In fact, gait recognition has already been used in practical cases in criminal investigations. However, gait recognition is susceptible to intra-subject variations, such as view angle, clothing, walking speed, shoes, and carrying status. Such hindering factors have prompted many researchers to explore new approaches that are robust to these variations.

    Research harnessing the capabilities of deep learning to improve gait recognition has largely been geared toward convolutional neural network (CNN) frameworks, which have attracted wide attention in computer vision, pattern recognition, and biometrics. Convolution is an operation that combines two signals to form a third signal that can provide more information.

    An advantage of a CNN-based approach is that network architectures can easily be designed for better performance by changing inputs, outputs, and loss functions. Nevertheless, a team of Osaka University-centered researchers noticed that existing CNN-based cross-view gait recognition fails to address two important aspects.

    “Current CNN-based approaches are missing the aspects on verification versus identification, and the trade-off between spatial displacement, that is, when the subject moves from one location to another,” study lead author Noriko Takemura explains.

    Considering these two aspects, the researchers designed input/output architectures for CNN-based cross-view gait recognition. They employed a Siamese network for verification, where an input is a pair of gait features for matching, and an output is genuine (the same subjects) or imposter (different subjects) probability.

    Notably, the Siamese network architectures are insensitive to spatial displacement, as the difference between a matching pair is calculated at the last layer after passing through the convolution and max pooling layers, which reduces the gait image dimensionality and allows for assumptions to be made about hidden features. They can therefore be expected to have higher performance under considerable view differences. The researchers also used CNN architectures where the difference between a matching pair is calculated at the input level to make them more sensitive to spatial displacement.
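
    A minimal PyTorch sketch of the verification setup described above (not the authors' code): two gait images pass through shared convolution and max-pooling layers, and the pair is compared only at the final layer, which is what makes the comparison tolerant of small spatial displacements. The input size and layer widths are assumptions.

    ```python
    # Sketch of a Siamese CNN for gait verification (genuine vs. imposter pair).
    import torch
    import torch.nn as nn

    class SiameseGaitNet(nn.Module):
        def __init__(self):
            super().__init__()
            # Shared feature extractor applied to both gait images.
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=7), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
                nn.Flatten(),
            )
            self.head = nn.LazyLinear(1)   # maps the pair difference to a score

        def forward(self, img_a, img_b):
            emb_a = self.features(img_a)
            emb_b = self.features(img_b)
            # The difference is taken at the last layer, after pooling.
            return torch.sigmoid(self.head(torch.abs(emb_a - emb_b)))

    # Toy usage: a pair of 64x44 single-channel gait images (sizes assumed).
    net = SiameseGaitNet()
    a, b = torch.rand(1, 1, 64, 44), torch.rand(1, 1, 64, 44)
    print(net(a, b))   # probability-like score that the pair is the same subject
    ```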

    “We conducted experiments for cross-view gait recognition and confirmed that the proposed architectures outperformed the state-of-the-art benchmarks in accordance with their suitable situations of verification/identification tasks and view differences,” coauthor Yasushi Makihara says.

    As spatial displacement is caused not only by view difference but also walking speed difference, carrying status difference, clothing difference, and other factors, the researchers plan to further evaluate their proposed method for gait recognition with spatial displacement caused by other covariates.


  9. Study suggests ways to make email and other technology interruptions productive

    November 11, 2017 by Ashley

    From the Stephen J.R. Smith School of Business, Queen’s University press release:

    The average knowledge worker enjoys a measly five minutes of uninterrupted time and, once interrupted, half won’t even get back to what they were doing in the first place. Yet organizational expectations or social pressures make it hard to resist the urge to check incoming emails or text messages — pressing tasks be damned.

    Research suggests that such interruptions are not necessarily bad, and can even be productive.

    “I know from personal experience that some interruptions are actually good,” says Shamel Addas, an assistant professor of information systems at Smith School of Business. “It depends on the content and timing. You can get some critical information that will help, like completing your task. So that’s one of the assumptions I feel needs to be challenged and tested.”

    Addas has conducted several studies to learn more about the relationship between technology-related interruptions and performance.

    He found that interruptions that did not relate to primary activities undermined workers’ performance — they led to higher error rates, poorer memory, and lower output quality. It also took longer for workers to return to their primary work and complete their tasks. Such interruptions also had an indirect effect on performance by increasing workers’ stress levels.

    Email interruptions that related to workers’ primary activities increased stress levels as well but also boosted workers’ performance. Such interruptions were tied to mindful processing of task activities, which led to better performance both in terms of efficiency and effectiveness.

    Addas also discovered that the very features of the interrupting technology can influence the outcomes of the interruptions positively or negatively. Individuals who engaged in several email conversation threads at the same time, or who kept getting interrupted by email but let the messages pile up in their inbox, experienced higher stress levels and lower performance.

    And those who reprocessed their received messages during interruption episodes, or who rehearsed their message responses before sending, saw some benefits. Doing this enabled them to process their tasks more mindfully, Addas says, which boosted their performance.

    What does this mean for managers? For one thing, just recognizing that there are different types of interruptions, each with its own trade-off, can help managers mitigate the negative impacts on performance and stress.

    Addas suggests that managers develop email management programs and interventions, such as specifying a time-response window for emails based on their urgency or relevance to primary activities. They can also establish periods of quiet time for uninterrupted work. And they can encourage work groups to develop effective coordination strategies to ensure one person’s interruptions do not adversely affect colleagues.

    As for individuals, they can start handling interruptions in batch rather than in real time to reduce the costs of switching back and forth between tasks and interruptions. To reduce stress from overload, Addas suggests, people should limit parallel exchanges during interruptions and delete or folder messages that are of limited use for their core work.

    “People might well consider thinking about the messages they construct and examining carefully their previously received messages as needed to ensure that they process their tasks more mindfully, which is beneficial for performance,” Addas says.

    Addas believes there are design implications to consider as well, particularly relating to context aware systems and email clients. Context aware systems know what kinds of tasks people are working on and can detect high and low periods of workload. “Email clients can be programmed to screen messages for task-relevant content and distinguish between incongruent and congruent interruptions,” he says. “They can then manipulate the timing at which each type of interruption is displayed to users, such as masking incongruent interruptions until a later time.”
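
    As a toy illustration of that idea (a sketch under assumed keywords and batching rules, not a description of any real email client), the filter below checks an incoming subject line against keywords for the current task, delivers congruent messages immediately, and defers incongruent ones to be released together later.

    ```python
    # Hypothetical interruption filter: congruent messages pass through,
    # incongruent ones are masked until the next batch delivery.
    from dataclasses import dataclass, field
    from typing import List, Set

    @dataclass
    class InterruptionFilter:
        task_keywords: Set[str]
        deferred: List[str] = field(default_factory=list)

        def handle(self, subject: str) -> str:
            congruent = any(k in subject.lower() for k in self.task_keywords)
            if congruent:
                return f"DELIVER NOW: {subject}"   # task-relevant interruption
            self.deferred.append(subject)          # mask until the next batch
            return f"DEFERRED: {subject}"

        def flush_batch(self) -> List[str]:
            batch, self.deferred = self.deferred, []
            return batch

    f = InterruptionFilter(task_keywords={"budget", "q4 report"})
    print(f.handle("Q4 report figures attached"))
    print(f.handle("Office party planning"))
    print(f.flush_batch())   # deferred messages delivered together later
    ```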


  10. Study suggests removing digital devices from the bedroom can improve sleep for children, teens

    November 9, 2017 by Ashley

    From the Penn State press release:

    Removing electronic media from the bedroom and encouraging a calming bedtime routine are among recommendations Penn State researchers outline in a recent manuscript on digital media and sleep in childhood and adolescence.

    The manuscript appears in the first-ever special supplement on this topic in Pediatrics and is based on previous studies that suggest the use of digital devices before bedtime leads to insufficient sleep.

    The recommendations, for clinicians and parents, are:

    1. Make sleep a priority by talking with family members about the importance of sleep and healthy sleep expectations;

    2. Encourage a bedtime routine that includes calming activities and avoids electronic media use;

    3. Encourage families to remove all electronic devices from their child or teen’s bedroom, including TVs, video games, computers, tablets and cell phones;

    4. Talk with family members about the negative consequences of bright light in the evening on sleep; and

    5. If a child or adolescent is exhibiting mood or behavioral problems, consider insufficient sleep as a contributing factor.

    “Recent reviews of scientific literature reveal that the vast majority of studies find evidence for an adverse association between screen-based media consumption and sleep health, primarily delayed bedtimes and reduced total sleep duration,” said Orfeu Buxton, associate professor of biobehavioral health at Penn State and an author on the manuscript.

    The reasons behind this adverse association likely include time spent on screens replacing time spent sleeping; mental stimulation from media content; and the effects of light interrupting sleep cycles, according to the researchers.

    Buxton and other researchers are further exploring this topic. They are working to understand if media use affects the timing and duration of sleep among children and adolescents; the role of parenting and family practices; the links between screen time and sleep quality and tiredness; and the influence of light on circadian physiology and sleep health among children and adolescents.