1. Researchers create new tool that measures active learning in classrooms

    March 26, 2017 by Ashley

    From the San Francisco State University press release:

    Researchers at San Francisco State University have developed a tool that for the first time can measure the extent to which instructors use innovative teaching methods by analyzing simple audio recordings of classroom sounds, according to a study published in the Proceedings of the National Academy of Sciences.

    Researchers analyzed recordings of 1,486 sessions from 67 different courses using the tool, dubbed DART (Decibel Analysis for Research in Teaching).

    “The breakthrough here is that for the first time we can effectively and inexpensively measure the use of innovative teaching strategies that have previously been shown to produce better learning than lecture only,” said SF State Professor of Biology Kimberly Tanner, principal investigator on the study. Tanner’s research focuses on novel teaching strategies.

    “In my work, I’ve found that many faculty members want to improve their teaching, but they don’t have the tools to help them see how they’re doing,” said Tanner. “It’s like trying to lose weight without a scale — you can’t improve what you can’t measure. DART provides a simple, easy way to answer the question ‘How much of class time do I devote to engaging my students in active learning?’”

    The findings are based on a comprehensive SF State project that includes 83 community college and university instructors involved in examining and promoting innovative teaching methods.

    Tanner says DART can be used in any classroom, and at this time it’s free and can be accessed online at http://dart.sfsu.edu/. SF State researchers have secured a provisional patent for the technology and eventually plan to create an app.

    According to Tanner, traditional teaching often focuses on a lecture that’s delivered by a faculty member to a group of students. Modern educational research has shown that active learning – a term used to describe a variety of related methods where students interact with each other and engage in problem-solving activities – drives stronger learning and better educational outcomes than lecture alone.

    But widespread adoption of innovative teaching methods that foster increased learning has been slowed by the lack of a way to quantify how much they are really being used by instructors, Tanner said. For example, faculty members can overestimate how often they truly engage students. In addition, educational reform leaders and funding agencies do not currently have easy and efficient ways of monitoring if teaching changes are happening in real-world courses.

    As part of the SF State research project, faculty members affiliated with 22 colleges and universities recorded their classes, which ranged in size from 4 to 300 students, using standard audio recorders. At the same time, trained evaluators took notes about what happened in the classes and identified the various instructional methods used, including faculty lecture, small-group discussions and quiet problem solving.

    Researchers then used an optimized computer program to classify sound in the audio recordings. Using only the classroom sounds, DART could classify the audio into three categories — single voice (traditional lecture with question and answer), multiple voice (student interactive group work), or no voice (student thinking, writing or individual problem solving) — with over 90 percent accuracy, which matched the ability of the human evaluators to correctly classify the classroom environment. It wasn’t necessary for DART to classify the actual content of the recorded speech, so student and instructor privacy was protected. DART could do its work based solely on the overall level and type of noise in the classroom.
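    The core idea can be sketched in a few lines. This toy version (with made-up thresholds and synthetic amplitude data; the published DART classifier is more sophisticated) labels a window of classroom audio using only its overall loudness and how much the level fluctuates:

```python
# Toy sketch of DART-style classification: label a window of classroom
# audio as "single voice", "multiple voice", or "no voice" using only
# loudness statistics. Thresholds are illustrative, not DART's.
from statistics import mean, stdev

def classify_window(amplitudes, quiet=0.05, noisy_spread=0.15):
    """Classify one window of absolute amplitude samples."""
    level = mean(amplitudes)
    spread = stdev(amplitudes)
    if level < quiet:
        return "no voice"        # near-silence: thinking, writing
    if spread > noisy_spread:
        return "multiple voice"  # overlapping talkers: group work
    return "single voice"        # one steady speaker: lecture or Q&A

# Synthetic windows: near-silence, one steady speaker, chaotic chatter
print(classify_window([0.01, 0.02, 0.01, 0.02, 0.01]))  # no voice
print(classify_window([0.40, 0.42, 0.41, 0.43, 0.40]))  # single voice
print(classify_window([0.10, 0.60, 0.15, 0.70, 0.20]))  # multiple voice
```

    Steady, loud audio reads as one speaker; loud but wildly fluctuating audio reads as overlapping group talk, which is why no speech content ever needs to be analyzed.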

    “Although the initial research focused on biology classes, the DART method can be applied in almost any teaching situation,” said Melinda Owens, postdoctoral scholar and lecturer at SF State, one of three lead authors on the paper. “Just like a person can track their progress toward their daily steps with a fitness tracker, a faculty member could track their progress toward adopting active learning with the DART system.”

    “DART looks like a great new tool for solving the biggest outstanding question in undergraduate science education: namely, what teaching methods are actually being used in college classrooms, and how can we routinely monitor them. Before this work, it appeared impossible to answer these critical questions,” said Nobel Prize-winning physicist and physics education researcher Carl Wieman of Stanford University. “This work now shows how to do that quite easily. I am surprised that this method is so effective at characterizing the teaching taking place, but the massive scale of the analysis and the care with which it was carried out are very convincing.”


  2. In learning, every moment counts

    March 21, 2017 by Ashley

    From the Friedrich Schiller University Jena press release:

    Psychologist at the University of Jena uncovers strong variability in motivation in learning situations.

    Motivation can be a fickle thing — if we are motivated, we can achieve a great deal, but if motivation is lacking, we are easily overwhelmed. And we all know people who give the impression of being highly motivated all the time, while others seem to be chronically lacking in drive.

    Just how variable individual motivation can be is the subject of research by Dr Julia Dietrich of Friedrich Schiller University in Jena (FSU), and she has published her findings in the current issue of the journal ‘Learning & Instruction’. Together with Jaana Viljaranta (University of Eastern Finland), Julia Moeller (Yale University) and Bärbel Kracke (FSU), she investigated students’ expectations and efforts.

    “It is known that motivation is an important factor for learning and performance, but research has so far been relatively general,” explains psychologist Dr Dietrich. To date, studies have primarily recorded how motivated people are in general and what drives them. “However, until now no one has studied the state of an individual’s motivation in a specific, time-limited situation, such as during a lecture or lesson at school,” she adds.

    Three snapshots in a lecture

    In order to determine this, during one semester 155 student teachers recorded their motivation three times within 90-minute lectures. “To do this they had to answer questions, which were always the same, during 10 lectures on Educational Psychology, either using their smartphone or on paper. Among other things, we wanted to know how competent they felt at that particular moment, whether they understood the material or found it a strain to follow the lecture. They were also asked whether they enjoyed the content of the lecture and whether they found it useful,” explains Dietrich.

    Even the researchers were astounded by the results of the non-representative study, because motivation fluctuated much more strongly during the 90 minutes than had previously been assumed. During a lecture, every single participant experienced phases of high motivation and of strong demotivation, and the timing of those phases was completely independent of the other students. “Interests are of course specific to individuals. So far, at any rate, we have been unable to detect any systematic trends such as particular materials or topics that caused motivation to rise or fall in all participants,” reports Dietrich. “The causes for the fluctuations need to be considered more carefully in future, in order to make learning contexts as a whole more motivating.”

    The study was also able to show how closely intertwined motivation and effort are. The more effort one makes, the more motivated one feels. The reverse is also true: “A person who is motivated also makes more effort,” the 33-year-old psychologist explains.

    Deriving recommendations for teachers

    According to Dietrich, the crucial thing is to recognise that every learning situation and every moment counts: lecturers can ‘lose’ students at any time as they address them in the lecture theatre, but they can also win them back again. In spite of differences between individuals, the task is now to develop initial practical recommendations for teacher training as regards, for example, content, teaching methods or the use of materials. In addition, the investigation demonstrates how teaching staff can obtain immediate evaluations of changes to course content or new methods, instead of having to wait until the end of a semester for an evaluation using a questionnaire.

    Julia Dietrich studied Psychology in her home town of Erfurt, Germany, and obtained a PhD in Developmental Psychology from the University of Erfurt in 2010. She subsequently spent two years at the University of Helsinki. Since 2013 she has been a member of Prof. Bärbel Kracke’s team at the University of Jena’s Department of Educational Psychology. Here, Dietrich will be doing further research on the ‘dark side’ of motivation. Giving an insight into the team’s future work, she says: “There are people who are very motivated and perform very well, but find it a great effort. Investigating what it ‘costs’ them to study, so that they are not at risk of burn-out at some time, will be the aim of our future studies.”


  3. Precise technique tracks dopamine in the brain

    March 17, 2017 by Ashley

    From the MIT press release:

    MIT researchers have devised a way to measure dopamine in the brain much more precisely than previously possible, which should allow scientists to gain insight into dopamine’s roles in learning, memory, and emotion.

    Dopamine is one of the many neurotransmitters that neurons in the brain use to communicate with each other. Previous systems for measuring these neurotransmitters have been limited in how long they provide accurate readings and how much of the brain they can cover. The new MIT device, an array of tiny carbon electrodes, overcomes both of those obstacles.

    “Nobody has really measured neurotransmitter behavior at this spatial scale and timescale. Having a tool like this will allow us to explore potentially any neurotransmitter-related disease,” says Michael Cima, the David H. Koch Professor of Engineering in the Department of Materials Science and Engineering, a member of MIT’s Koch Institute for Integrative Cancer Research, and the senior author of the study.

    Furthermore, because the array is so tiny, it has the potential to eventually be adapted for use in humans, to monitor whether therapies aimed at boosting dopamine levels are succeeding. Many human brain disorders, most notably Parkinson’s disease, are linked to dysregulation of dopamine.

    “Right now deep brain stimulation is being used to treat Parkinson’s disease, and we assume that that stimulation is somehow resupplying the brain with dopamine, but no one’s really measured that,” says Helen Schwerdt, a Koch Institute postdoc and the lead author of the paper, which appears in the journal Lab on a Chip.

    Studying the striatum

    For this project, Cima’s lab teamed up with David H. Koch Institute Professor Robert Langer, who has a long history of drug delivery research, and Institute Professor Ann Graybiel, who has been studying dopamine’s role in the brain for decades with a particular focus on a brain region called the striatum. Dopamine-producing cells within the striatum are critical for habit formation and reward-reinforced learning.

    Until now, neuroscientists have used carbon electrodes with a shaft diameter of about 100 microns to measure dopamine in the brain. However, these can only be used reliably for about a day because they produce scar tissue that interferes with the electrodes’ ability to interact with dopamine, and other types of interfering films can also form on the electrode surface over time. Furthermore, there is only about a 50 percent chance that a single electrode will end up in a spot where there is any measurable dopamine, Schwerdt says.

    The MIT team designed electrodes that are only 10 microns in diameter and combined them into arrays of eight electrodes. These delicate electrodes are then wrapped in a rigid polymer called PEG, which protects them and keeps them from deflecting as they enter the brain tissue. The PEG dissolves during insertion, however, so it does not remain in the brain.

    These tiny electrodes measure dopamine in the same way that the larger versions do. The researchers apply an oscillating voltage through the electrodes, and when the voltage is at a certain point, any dopamine in the vicinity undergoes an electrochemical reaction that produces a measurable electric current. Using this technique, dopamine’s presence can be monitored at millisecond timescales.
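    In spirit, the readout works like the following sketch (the oxidation potential, currents, and threshold below are illustrative placeholders, not the actual fast-scan parameters): sweep the voltage, record the current, and report dopamine when the current near its oxidation potential rises above baseline.

```python
# Illustrative sketch of the voltammetry readout: scan a voltage range,
# record current, and flag dopamine when the current near its oxidation
# potential exceeds a baseline threshold. All numbers are made up.

def detect_dopamine(sweep, oxidation_v=0.6, tolerance=0.05, threshold=2.0):
    """sweep: list of (voltage, current) pairs from one voltage scan.
    Returns True if current near the oxidation potential exceeds threshold."""
    near_peak = [i for v, i in sweep if abs(v - oxidation_v) <= tolerance]
    return bool(near_peak) and max(near_peak) > threshold

# One scan with a current peak at ~0.6 V (dopamine present)...
with_da = [(0.0, 0.1), (0.3, 0.2), (0.6, 5.0), (0.9, 0.3)]
# ...and one flat scan (no dopamine)
without_da = [(0.0, 0.1), (0.3, 0.2), (0.6, 0.3), (0.9, 0.2)]

print(detect_dopamine(with_da))     # True
print(detect_dopamine(without_da))  # False
```

    Because each scan takes only a fraction of a millisecond in the real system, repeating it continuously yields the millisecond-scale dopamine trace described above.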

    Using these arrays, the researchers demonstrated that they could monitor dopamine levels in many parts of the striatum at once.

    “What motivated us to pursue this high-density array was the fact that now we have a better chance to measure dopamine in the striatum, because now we have eight or 16 probes in the striatum, rather than just one,” Schwerdt says.

    The researchers found that dopamine levels vary greatly across the striatum. This was not surprising, because they did not expect the entire region to be continuously bathed in dopamine, but this variation has been difficult to demonstrate because previous methods measured only one area at a time.

    How learning happens

    The researchers are now conducting tests to see how long these electrodes can continue giving a measurable signal, and so far the device has kept working for up to two months. With this kind of long-term sensing, scientists should be able to track dopamine changes over long periods of time, as habits are formed or new skills are learned.

    “We and other people have struggled with getting good long-term readings,” says Graybiel, who is a member of MIT’s McGovern Institute for Brain Research. “We need to be able to find out what happens to dopamine in mouse models of brain disorders, for example, or what happens to dopamine when animals learn something.”

    She also hopes to learn more about the roles of structures in the striatum known as striosomes. These clusters of cells, discovered by Graybiel many years ago, are distributed throughout the striatum. Recent work from her lab suggests that striosomes are involved in making decisions that induce anxiety.

    This study is part of a larger collaboration between Cima’s and Graybiel’s labs that also includes efforts to develop injectable drug-delivery devices to treat brain disorders.

    “What links all these studies together is we’re trying to find a way to chemically interface with the brain,” Schwerdt says. “If we can communicate chemically with the brain, it makes our treatment or our measurement a lot more focused and selective, and we can better understand what’s going on.”


  4. New studies illustrate how gamers get good

    by Ashley

    From the Brown University press release:

    We all know that practice makes us better at things, but scientists are still trying to understand what kinds of practice work best. A research team led by a Brown University computer scientist has found insights about how people improve their skills in a rather unlikely place: online video games.

    In a pair of studies reported in the journal Topics in Cognitive Science, researchers looked at data generated from thousands of online matches of two video games, the first-person shooter game Halo: Reach and the strategy game StarCraft 2. The Halo study revealed how different patterns of play resulted in different rates of skill development in players. The StarCraft study showed how elite players have unique and consistent rituals that appear to contribute to their success.

    “The great thing about game data is that it’s naturalistic, there’s a ton of it, and it’s really well measured,” said Jeff Huang, a computer science professor at Brown and the study’s lead author. “It gives us the opportunity to measure patterns for a long period of time over a lot of people in a way that you can’t really do in a lab.”

    Halo: Reach is a science fiction war game in which players battle with rifles, grenades and other weapons (part of a wildly popular series of Halo games). One of the most popular ways to play is known as Team Slayer, in which online players are placed together on teams for 10- to 15-minute matches to see which team can score the most kills against the opposing team. In order to arrange matches in which players have roughly similar skill levels, the game rates players using a metric called TrueSkill. TrueSkill ratings are constantly updated as players play more matches and their skill level changes, so they offered Huang and his colleagues the opportunity to see what kinds of playing habits influence a player’s skill acquisition.

    Huang and his colleagues looked at data generated by seven months of Halo matches — every online match played by the 3.2 million people who started playing the week the game was released in 2010.

    Perhaps unsurprisingly, the research showed that people who played the most matches per week (more than 64) had the largest increase in skill over time. But playing lots of games wasn’t the most efficient way to improve skill. Looking at the data another way — in terms of which groups showed the most improvement per match rather than over time — showed markedly different results. That analysis showed that, over their first 200 matches, those who played four to eight matches per week gained the most skill per match, followed by those who played eight to 16 matches.
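    The two ways of slicing the data can be illustrated with a short sketch (the player records and bucket boundaries below are synthetic, chosen only to mirror the reported pattern, not taken from the study's dataset):

```python
# Sketch of the two analyses of the Halo data: total skill gain over
# time vs. skill gain per match. Player records here are synthetic.
from collections import defaultdict

# (matches_per_week, total_skill_gain, matches_played) -- fake data
players = [
    (70, 12.0, 500),   # heavy players: biggest total gain...
    (70, 11.0, 480),
    (6, 6.0, 150),     # moderate players: biggest gain *per match*
    (6, 5.5, 140),
    (12, 5.0, 160),
]

def bucket(matches_per_week):
    """Partial bucketing, enough for the synthetic records above."""
    if matches_per_week > 64:
        return ">64/week"
    if 4 <= matches_per_week <= 8:
        return "4-8/week"
    return "8-16/week"

totals = defaultdict(list)
per_match = defaultdict(list)
for rate, gain, n in players:
    totals[bucket(rate)].append(gain)
    per_match[bucket(rate)].append(gain / n)

avg = lambda xs: sum(xs) / len(xs)
best_total = max(totals, key=lambda b: avg(totals[b]))
best_rate = max(per_match, key=lambda b: avg(per_match[b]))
print(best_total)  # >64/week: most total improvement
print(best_rate)   # 4-8/week: most improvement per match
```

    The same records answer both questions differently: the heaviest players accumulate the most skill, while the moderate players learn the most from each individual match.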

    “What this suggests is that if you want to improve the most efficiently, it’s not about playing the most matches per week,” Huang said. “You actually want to space out your activity a little bit and not play so intensively.”

    But breaks in activity shouldn’t be too long. The researchers also looked specifically at how breaks in play affect a player’s skill. Short breaks — one or two days — were no big deal, the study found. Players gained back lost skill over the course of the next match they played. But longer breaks were shown to have longer-term effects. After a 30-day break, for example, players took around 10 matches to get back the skill level they had before the break.

    The lesson from the study, Huang says, seems to be that moderation is a good thing in terms of learning efficiency, as long as breaks in play aren’t too long.

    The second study focused on the strategy game StarCraft 2. Like other strategy games, StarCraft requires players to actively manage hundreds of game units at the same time. Players must build bases and other infrastructure, manage economies, train soldiers and direct them in combat. Looking at data from hundreds of StarCraft matches, the study compared the habits of elite players with those of lesser skill.

    The study showed that one major difference between more skilled and less skilled players was the effective use of “hotkeys” — customized keyboard shortcuts that enable commands to be given quickly to unit groups. Less skilled players used hotkeys sparingly, opting instead to issue commands to individual units by pointing and clicking with a mouse. But all elite players made copious use of hotkeys, using them to issue up to 200 actions per minute during a typical match.

    But the important thing wasn’t just the fact that elite players use the hotkeys more, it’s that they form unique and consistent habits in how they use them. Those habits were so unique and consistent, in fact, that the researchers were able to identify specific players with more than 90 percent accuracy just by looking at their hotkey patterns. It’s likely, the researchers say, that those habits become almost second nature, enabling players to keep cool and issue commands when the pressure of the game ratchets up.
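    Such a fingerprinting scheme can be sketched as a nearest-neighbor match on hotkey-frequency vectors (the profiles below are invented, and the paper's actual features and classifier may differ):

```python
# Sketch of identifying players from hotkey habits: represent each
# player as a frequency vector over their hotkeys and match a new
# game to the nearest known profile. Profiles here are invented.
import math

profiles = {
    "player_a": [0.70, 0.20, 0.10],  # leans heavily on hotkey 1
    "player_b": [0.10, 0.30, 0.60],  # prefers hotkey 3
    "player_c": [0.33, 0.34, 0.33],  # spreads usage evenly
}

def identify(observed):
    """Return the known player whose hotkey profile is closest
    (Euclidean distance) to the observed frequencies."""
    return min(profiles,
               key=lambda name: math.dist(profiles[name], observed))

# Hotkey frequencies measured from an unlabeled match
print(identify([0.65, 0.25, 0.10]))  # player_a
```

    Because each player's habits stay so consistent from match to match, even this simple distance measure separates them; the stability of the habit, not the sophistication of the classifier, does the work.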

    The study also showed that elite players seem to “warm up” their hotkey use. Even in the very early stages of a match, when there are fewer units in play and fewer things happening in the game, elite players still scrolled rapidly through their hotkeys, often issuing meaningless dummy commands to various units.

    “They’re getting their minds and bodies into the routines that they’ll need when they’re at peak performance later in the game,” Huang said. “They’re getting themselves warmed up.”

    Beyond simply learning about what makes gamers good, Huang hopes the work will shed light more generally on the ways in which people can optimize their performance in other domains. For instance, perhaps warming up like StarCraft players do would be helpful for people who have jobs that require paying attention to lots of different things at once.

    “Air traffic controllers come to mind,” Huang said. “Maybe when someone first gets in the seat, they should take a few moments and re-enact what they do until they can get warmed up and in the zone.”

    The results of the Halo study echo the findings of other cognitive science work, Huang says, in suggesting that moderate activity with short breaks could be a good thing.

    “People have seen this for other things, like studying,” Huang said. “Cramming is generally regarded as less efficient than doing smaller bits of studying throughout the semester. I think we’re seeing something similar here in our study.”

    Taken together, the researchers write, the message from these studies seems to be, “practice consistently, stay warm.”


  5. Study finds new link between childhood abuse and adolescent misbehavior

    March 16, 2017 by Ashley

    From the University of Pittsburgh press release:

    An important learning process is impaired in adolescents who were abused as children, a University of Pittsburgh researcher has found, and this impairment contributes to misbehavior patterns later in life.

    Associative learning — the process by which an individual subconsciously links experiences and stimuli together — partially explains how people generally react to various real-world situations. In a newly released study, published in the Journal of Child Psychology and Psychiatry, Pitt Assistant Professor Jamie L. Hanson detailed the connection between impaired associative learning capacities and instances of early childhood abuse.

    “We primarily found that a poorer sense of associative learning negatively influences a child’s behavior patterns during complex and fast-changing situations. Having this knowledge is important for child psychologists, social workers, public policy officials and other professionals who are actively working to develop interventions,” said Hanson, who teaches in Pitt’s Department of Psychology within the Kenneth P. Dietrich School of Arts and Sciences with a secondary appointment in the University’s Learning Research and Development Center. “We have long known that there is a link between behavioral issues in adolescents and various forms of early life adversities. Yet, the connection isn’t always clear or straightforward. This study provides further insight into one of the many factors of how this complicated relationship comes to exist.”

    To uncover these relationships, researchers asked 81 adolescents between the ages of 12 and 17 to play computer games where the child had to figure out which set of visual cues were associated with a reward. Forty-one participants had endured physical abuse at a young age, while the remaining 40 served as a comparison group. The most important aspect of the test, said Hanson, was that the cues were probabilistic, meaning children did not always receive positive feedback.
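    A task like this can be simulated with a simple reward-prediction update in the spirit of Rescorla-Wagner learning (an illustrative model, not the one used in the study; the cue probabilities, learning rate, and trial count are made up):

```python
# Sketch of learning cue values from probabilistic feedback: each
# outcome nudges the cue's estimated reward probability toward what
# was just observed. Parameters are illustrative only.
import random

def learn_cue_values(reward_probs, trials=2000, alpha=0.1, seed=0):
    """Estimate each cue's reward probability from noisy feedback."""
    rng = random.Random(seed)
    values = {cue: 0.5 for cue in reward_probs}  # neutral priors
    for _ in range(trials):
        cue = rng.choice(list(reward_probs))
        reward = 1.0 if rng.random() < reward_probs[cue] else 0.0
        # Nudge the estimate toward the observed outcome
        values[cue] += alpha * (reward - values[cue])
    return values

# Cue A pays off 80% of the time, cue B only 20% -- feedback is mixed,
# but gradual updating should still rank A above B.
values = learn_cue_values({"A": 0.8, "B": 0.2})
print(values["A"] > values["B"])  # True
```

    Because feedback is probabilistic, any single trial is misleading; only gradual updating over many trials recovers the true ranking, which is exactly the kind of learning the study found impaired in the abused group.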

    “The participants who had been exposed to early childhood abuse were less able than their peers to correctly learn which stimuli were likely to result in reward, even after repeated feedback,” said Hanson. “In life we are often given mixed or little to no feedback from our significant others, bosses, parents and other important people in our lives. We have to be able to figure out what might be the best thing to do next.”

    Hanson and his colleagues also observed that mistreated children were generally less adept at differentiating which behaviors would lead to the best results for them personally when interacting with others. Additionally, abused children displayed more pessimism about the likelihood of positive outcomes compared to the group who hadn’t been abused. Taken as a whole, these findings clarify the relationship between physical abuse and the aggressive and disruptive behaviors that often plague abused children well into the later stages of childhood.


  6. Video games can mitigate defensiveness resulting from bad test scores

    by Ashley

    From the Texas Tech University press release:

    One of the worst feelings a student can have is receiving a bad grade on an exam, whether it’s a test they prepared well for or didn’t prepare for at all. The prevalence of video games in today’s society helps mitigate some of the effects of those low test scores by reaffirming students’ abilities in another area they deem important.

    Video game players can get temporarily lost in alternative worlds, whether transforming into the ultimate fighting machine or the greatest athlete on the planet. But no matter the game, the goal is to put the empty feeling of a bad test at school behind them by reaffirming their excellence in a favorite video game. It’s a scene that plays out all across the country, and one that has at times received criticism for placing too much emphasis on the game and not enough on schoolwork.

    But John Velez, an assistant professor in the Department of Journalism & Electronic Media in the Texas Tech University College of Media & Communication, says that may not always be the case. In fact, his research suggests those who value video game success as part of their identity and received positive feedback on their video game play were more willing to accept the bad test score and consider the implications of it, something that is crucial for taking steps to change study habits and ensure they do better on future exams.

    Conversely, those who do not value success in video games but received positive video game feedback were actually more defensive to having performed poorly on a test. They were more likely to discredit the test and engage in self-protective behaviors. Regardless, the results seem to throw a wrench into the theory that video games and schoolwork are detrimental to each other.

    The key, however, is making sure those playing video games after a bad test are not doing it just as an escape, but making sure after playing video games they understand why they did badly on the test and what they need to do to perform better on the next one.

    “People always kind of talk about video-game play and schoolwork in a negative light,” Velez said. “They talk about how playing video games in general can take away from academic achievement. But for me and a lot of gamers, it’s just a part of life and we use it a lot of times to help get through the day and be more successful versus gaming to get away from life.

    “What I wanted to look into was, for people who identify as a gamer and identify as being good at games, how they can use playing video games after something like a bad exam to help deal with the implications of a bad exam, which makes it more likely they will think about the implications and accept the idea that, ‘OK, I didn’t do well on this exam and I need to do better next time.’”

    Negative results, positive affirmation

    Velez said past research suggests receiving negative feedback regarding a valued self-image brings about a defensive mechanism whereby people discredit or dismiss the source of the information. Conversely, Self-Affirmation Theory says that affirming or bolstering an important self-image that is not related to the negative feedback can effectively reduce defensiveness.

    “If you’re in a bad mood, you can play a good game and get into a good mood,” Velez said. “But I wanted to go deeper and think about how there are times when you are in a bad mood but you are in a bad mood for a very specific reason. Just kind of ignoring it and doing something to get into a good mood can be bad. It would be bad if you go home and play a video game to forget about it and the next time not prepare better for the test or not think about the last time you did badly on a test.”

    For the research, Velez was interested in two types of people — those who identify as placing importance on good video game play and those who do not. How good they were at playing the game was not a factor, just that they identified as it being important to their identity or not.

    Participants in the research were administered a survey to assess their motivations for video-game play and the importance of video games to their identity. They were then given an intelligence test and were told the test was a strong measure of intelligence. Upon completing the test, participants were given either negative feedback on their performance or no feedback at all.

    That negative feedback naturally produces an amount of defensiveness for anyone regarding their performance, regardless of the importance they put on being successful at video games.

    Participants then played a generic shooting video game for 15 minutes that randomly provided positive or no feedback to the player, and players were told the game was an adequate test of their video-game playing skills. Participants then completed an online survey containing ratings of the intelligence test and self-ratings on intelligence.

    What Velez discovered was those who place importance on being successful at video games were less likely to be defensive about the poor performance on the intelligence test.

    “Defensiveness is really a bad thing a lot of times,” Velez said. “It doesn’t allow you to think about the situation and think about what you should have done differently. A lot of times people use it to protect themselves and ignore it or move on, which makes it likely the same thing is going to happen over and over again.”

    The second discovery was one Velez didn’t expect: those who performed badly on the intelligence exam and don’t identify as video game players became even more defensive about their intelligence exam result. They were more likely to use the positive video game feedback as further evidence that they are intelligent and that the test is flawed or doesn’t represent their true intelligence.

    “That was like this double-edged sword that I didn’t realize I was going to find,” Velez said. “It was definitely unexpected, but once you think about it theoretically, it intuitively makes sense. After receiving negative information about yourself you instinctively start looking for a way to make yourself feel better and you usually take advantage of any opportunities in your immediate environment.”

    Changing behavior

    A common punishment administered by parents for inappropriate behavior or poor performance in school has been to take away things the child enjoys, such as television, the use of the car, or their video games.

    One might infer from this research that taking video games from the child might actually be doing them harm by not allowing them to utilize the tool that makes them feel better or gives them an avenue to understand why they performed poorly in school and how they must do better.

    Velez, however, said that’s not necessarily the case.

    “I don’t think parents should change their practices until more research is conducted, particularly looking at younger players and their parents’ unique parenting styles,” Velez said. “The study simply introduces the idea that some people may benefit from some game play in which they perform well, which may make it easier for them to discuss and strategize for the future so they don’t run into this problem again after playing.”

    Velez said the study also introduces specific stipulations about when the benefits of video-game play occur and when it may actually backfire.

    “If parents know their child truly takes pride in their video-game skills, then their child may benefit from doing well in their favorite game before addressing the negative test grade,” Velez said. “However, there’s the strong possibility that a child is using the video game as a way to avoid the implications of a bad test grade, so I wouldn’t suggest parents change how they parent their children until we’re able to do more research.”

    Therein lies the fine line, because the study also suggests receiving positive feedback on video games doesn’t necessarily translate into a better performance on a future exam. Velez said the common idea is that defensiveness prevents people from learning and adapting from the feedback they received. Those who are less defensive about negative self-information are more likely to consider the causes and precursors of the negative event, making it more likely a change in behavior will occur. But this was not a focus of this particular study and will have to be examined further.

    Velez said he would also like to identify other characteristics of video-game players who are more likely to benefit from this process compared to increased negative defensive reaction. This could be used to help identify a coping strategy or lead to further research about parenting strategies for discussing sensitive subjects with children.

    “What I want to get out of this research is, for people who care about gaming as part of their identity, how they can use video games in a positive way when dealing with negative things in life,” Velez said.


  7. Harnessing ADHD for business success

    March 15, 2017 by Ashley

    From the Technical University of Munich (TUM) press release:

    The symptoms of ADHD foster important traits associated with entrepreneurship. That conclusion was reached in a study conducted by an international team of economists, who found that entrepreneurs with ADHD embrace new experiences and demonstrate passion and persistence. Their intuitive decision making in situations involving uncertainty was seen by the researchers as a reason for reassessing existing economic models.

    Poor concentration, hyperactivity, a lack of self-regulation — at first glance, the symptoms of ADHD would seem to lower performance. On the other hand, successful entrepreneurs are frequently reported to have ADHD. “We noticed some time ago that some symptoms of ADHD resemble behaviors commonly associated with entrepreneurship — in a positive sense,” says Prof. Holger Patzelt of the Entrepreneurship Research Institute at the Technical University of Munich (TUM).

    In cooperation with Johan Wiklund, professor at Syracuse University, and Dimo Dimov, professor at the University of Bath, Patzelt asked 14 self-employed people with ADHD about their diagnoses, their careers and their personal background. The study shows that important symptoms of ADHD had a decisive impact on the subjects’ decision to go into business and on their entrepreneurial approach:

    Impulsiveness

    People with ADHD are quick to lose their patience. Several of the participants in the study cited boredom in their previous jobs as a reason for setting up their own company, where they could follow up on their own ideas whenever they wanted. One woman reported that she had introduced 250 new products within just a few years. In situations that would be highly stressful for others, such as difficult meetings with important customers, many of those surveyed felt at ease and stimulated. “Their impulsiveness, resulting from ADHD, gives them the advantage of being able to act under unforeseen circumstances without falling into anxiety and paralysis,” says Patzelt.

    Most of those surveyed act without thinking, even when making far-reaching decisions. One of the entrepreneurs described buying a friend’s company over lunch. He only learned of the friend’s plan to retire during the meal. Other participants reported that they make investments with no strategy and commit large sums of money to projects with highly uncertain outcomes. Some entrepreneurs believe that this kind of quick decision making is the only way to be productive, and are willing to live with setbacks as a result. Some have difficulty coping with structured activities.

    “A marked willingness to try out new things and take risks is an important entrepreneurial trait,” says Patzelt. However, the respondents’ impulsive actions led to success only when they focused on activities essential to the development of their businesses. One disadvantage of their impulsiveness was mentioned by all of them: problems with routine tasks such as bookkeeping.

    Hyperfocus

    When people with ADHD have a strong interest in a task, they display an unusual level of concentration known as hyperfocus. One entrepreneur reported that he often becomes completely absorbed in crafting customer solutions. Another constantly keeps up with the new technologies in his industry to the point that he is now much in demand as an expert. “With their passion and persistence, and the expertise they acquire as a result, entrepreneurs can gain a substantial competitive advantage,” says Patzelt.

    High activity level

    Many of the entrepreneurs in the study work day and night without taking time off. That is due to their hyperfocus, but also to the physical restlessness associated with ADHD, which the entrepreneurs use to fuel their workload. Since their energy levels are not constant throughout the day, one advantage of running their own businesses is that they can set their own hours.

    “The logic of people with ADHD symptoms is better attuned to entrepreneurial action.”

    Summing up the results, Patzelt says, “ADHD was a key factor in their decision to go into business for themselves and decisively impacted important entrepreneurial traits: risk taking, passion, persistence and time commitment. Impulsiveness has a special role to play. For people with ADHD it is okay to make intuitive decisions even if the results are bad.”

    Although one third of those surveyed failed in their business ventures or had little success, Patzelt sees the results of the study as vital for prompting a reassessment of prevailing assumptions in entrepreneurship research: “The way we evaluate entrepreneurial decisions is largely based on rationality and good outcomes. In view of the multitude of uncertainties, however, can such decisions always be rational? People with ADHD show us a different logic that is perhaps better suited to entrepreneurship.”



  8. Alcoholism may be caused by dynamical dopamine imbalance

    by Ashley

    From the National Research University Higher School of Economics press release:

    Researchers from the Higher School of Economics, Ecole Normale Supérieure, Paris, Indiana University and the Russian Academy of Sciences Nizhny Novgorod Institute of Applied Physics have identified potential alcoholism mechanisms, associated with altered dopaminergic neuron response to complex dynamics of prefrontal cortex neurons affecting dopamine release.

    Interacting neuronal populations in the cerebral cortex generate electrical impulses (called action potentials) that are characterized by specific spatial and temporal patterns of neural firing (complex neural dynamics). These firing patterns depend on the intrinsic properties of individual neurons, on the neural network connectivity, and on the inputs to these circuits. The basis for this computational study is experimental evidence for a specific population of prefrontal cortex neurons that connects via excitatory synapses to dopaminergic and inhibitory ventral tegmental area (VTA) neurons. Thus, the structure of neural firing in the prefrontal cortex can directly affect dopamine cell response and dopamine release.

    Boris Gutkin leads the Theoretical Neuroscience Group at the HSE Centre for Cognition and Decision Making. One of the group’s research areas focuses on neurobiological processes leading to substance abuse and addiction — specifically, on detecting links between the neurobiological mechanisms of a drug’s action and observable behavioural reactions. In particular, the researchers use mathematical modelling to examine specific characteristics of dopaminergic neuron firing patterns and dynamics which can lead to addiction.

    Dopamine, a neurotransmitter released by dopaminergic neurons in the brain, is a chemical which plays a key role in the internal brain reward system that drives learning of motivated behavior. By acting within the reward systems in the brain (e.g. the ventral tegmental area, found deep in the midbrain; the striatum, responsible for selecting correct actions; and the prefrontal cortex, which controls voluntary goals and behaviors), it signals either unexpected reward or anticipation of reward resulting from a particular action or event. Thus, dopamine provides positive reinforcement of behaviours that lead to these rewards, causing them to be repeated. Conversely, where a particular action fails to produce the expected positive effect or is followed by an unpleasant event, dopamine release decreases sharply, leading to frustration and an unwillingness to repeat the behaviour in question.

    Many dopamine neurons produce these learning signals by emitting rapid bursts of spikes when the animal receives more reward than expected, or pausing when there is less than expected. In order to govern learning correctly, the number of bursts (and the dopamine released) must be proportional to the discrepancy between the received and the expected reward. For example, if one expects to get 50 euros for one’s work but gets 100, the dopamine activity should be proportional to 50; if one expects 50 but gets 500, the activity should signal a number proportional to 450. Hence the bigger the mismatch, the stronger the response. Yet another subgroup of dopamine neurons simply signals whether or not stimuli are important for behaviour, giving binary all-or-none responses. These binary signals then drive orienting or approach towards the important stimuli. So the two dopamine cell populations have different response modes: an analogue learning signal or an all-or-none importance alert.
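    The proportional learning signal described here is the standard reward-prediction-error computation from reinforcement learning. A minimal sketch of the arithmetic (illustrative only, not code from the study):

    ```python
    def prediction_error(expected, received):
        """Reward prediction error: the dopamine learning signal is
        proportional to the mismatch between the received and the
        expected reward."""
        return received - expected

    # Expect 50 euros but receive 100: the signal is proportional to 50.
    print(prediction_error(expected=50, received=100))  # 50
    # Expect 50 but receive 500: a much bigger mismatch, stronger response.
    print(prediction_error(expected=50, received=500))  # 450
    ```

    The binary “importance alert” mode, by contrast, would ignore this graded difference and simply fire all-or-none when a stimulus matters for behaviour.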

    Two Modes of Neuron Activity

    Recent research by Gutkin’s group, conducted jointly with scientists from Indiana University (Alexey Kuznetsov, Mathematics, and Christopher Lapish, Neuroscience) and the RAN Institute of Applied Physics (Denis Zakharov), suggests potential mechanisms of alcohol’s effect on dopaminergic neuronal activity. Their paper ‘Dopamine Neurons Change the Type of Excitability in Response to Stimuli’, published in PLOS, features a computational model of dopamine (DA) neuron activity, describing its key properties and demonstrating that the DA neuron’s response mode can vary depending on the pattern of the synaptic input (including that from the prefrontal cortex).

    When in the first mode, the amount of dopamine released by the DA neurons reflects the learning signal, proportional to the difference between what an animal or human expects and what they actually receive as a result of a certain action. When in the second mode, dopamine release serves as a binary reference signal indicating whether or not a certain event is important. Hence the results of the computational study imply that dopamine neurons may not be two distinct populations, but are capable of moving flexibly from one response mode to another depending on the nature of the signals they receive.

    In a related study, ‘Contribution of synchronized GABAergic neurons to dopaminergic neuron firing and bursting’, published in the Journal of Neurophysiology, the same group suggests that in addition to direct links between DA and prefrontal cortex neurons, indirect neural inputs from the prefrontal cortex via inhibitory (GABAergic) VTA neurons should be considered. In particular, the researchers found that signals from the prefrontal cortex can cause GABAergic neurons to synchronise, producing a strong inhibitory effect on DA neurons. In some cases, such inhibition can have a paradoxical result: instead of suppressing DA neuron firing and thus decreasing dopamine release, it can increase DA firing frequency, leading to higher dopamine release and positive reinforcement.

    What It Means for Our Understanding of Alcoholism

    Experimental evidence suggests that alcohol is capable of modifying DA neuron firing patterns, both indirectly via prefrontal cortex and inhibitory VTA neurons, and directly by acting on DA neurons themselves. Based on the findings of Gutkin and his collaborators, one can hypothesise which mechanisms may be involved.

    The VTA contains about 20,000 DA neurons. In someone who is not alcoholic, some of these serve to signal that a certain stimulus has importance, while the rest transmit the error signal. A certain balance between the two types of signals is essential for good judgment and proper behaviour. Alcohol disrupts the balance by changing both the pattern of neural activity in the prefrontal cortex and DA neuron properties. This change may bias more neurons to signal importance as opposed to error. So, under the influence of alcohol, any stimulus associated with alcohol is treated by DA neurons as having behavioural and motivational importance, regardless of whether or not it matches the anticipated outcome, while in the absence of alcohol, neural firing would normally be consistent with the expected and received reinforcements.

    This effect may be the reason why alcoholics eventually develop a narrower than normal range of behavioural responses, dooming them to seek out alcohol. In doing so, they are either unaware of the potential consequences of their actions or, even if they can anticipate such consequences, this awareness has little or no effect on their behaviour. According to surveys, most alcoholics understand that they may lose their home and family and even die from binge drinking, but this rarely stops them. To properly assess the consequences of drinking, their prefrontal cortex needs to integrate and learn to properly represent the negative expectations from this behaviour, supported by reinforcement learning signals from DA neurons. This may not happen, however, because alcohol (like other mood-altering substances) can affect both the neural activity in the addict’s prefrontal cortex and their DA neurons directly, blocking the learning.

    Finding a way to balance out the dopamine function in the addicted brain and to elicit adequate neural responses to environmental stimuli even under the influence could offer hope to people with substance abuse problems.


  9. Protein called GRASP1 is needed to strengthen brain circuits

    by Ashley

    From the Johns Hopkins University press release:

    Learning and memory depend on cells’ ability to strengthen and weaken circuits in the brain. Now, researchers at Johns Hopkins Medicine report that a protein involved in recycling other cell proteins plays an important role in this process.

    Removing this protein reduced mice’s ability to learn and recall information. “We see deficits in learning tasks,” says Richard Huganir, Ph.D., professor and director of the neuroscience department at the Johns Hopkins University School of Medicine.

    The team also found mutations in the gene that produces the recycling protein in a few patients with intellectual disability, and those genetic errors affected neural connections when introduced into mouse brain cells. The results, reported in the March 22 issue of Neuron, suggest that the protein could be a potential target for drugs to treat cognitive disorders such as intellectual disability and autism, Huganir says.

    The protein, known as GRASP1, short for GRIP-associated protein 1, was previously shown to help recycle certain protein complexes that act as chemical signal receptors in the brain. These receptors sit on the edges of neurons, and each cell continually shuttles them between its interior and its surface. By adjusting the balance between adding and removing available receptors, the cell fortifies or weakens the neural connections required for learning and memory.

    Huganir says most previous research on GRASP1 was conducted in laboratory-grown cells, not in animals, while the new study was designed to find out what the protein does at the behavioral level in a living animal.

    To investigate, his team genetically engineered so-called knockout mice that lacked GRASP1 and recorded electrical currents from the animals’ synapses, the interfaces between neurons across which brain chemical signals are transmitted. In mice without GRASP1, neurons appeared to spontaneously fire an average of 28 percent less frequently than in normal mice, suggesting that they had fewer synaptic connections.

    Next, Huganir’s team counted protrusions on the mice’s brain cells called spines, which have synapses at their tips. The average density of spines in knockout mice was 15 percent lower than in normal mice, perhaps because defects in receptor recycling had caused spines to be “pruned” or retracted. Neurons from mice without GRASP1 also showed weaker long-term potentiation, a measure of synapse strengthening, in response to electrical stimulation.

    The team then tested the mice’s learning and memory. First, the animals were placed in a tub of milky water and trained to locate a hidden platform. The normal mice needed five training sessions to quickly find the platform in the opaque water, while the knockout mice required seven; the next day, the normal mice spent more time swimming in that location than in other parts of the tub, but the knockout mice seemed to swim around randomly.

    Second, the mice were put in a box with light and dark chambers and given a slight shock when they entered the dark area. The next day, the normal mice hesitated for an average of about four minutes before crossing into the dark chamber, while the knockout mice paused for less than two minutes. “Their memory was not quite as robust,” Huganir says.

    To assess the importance of GRASP1 in humans, the team identified two mutations in the gene that produces the protein in three young male patients with intellectual disabilities, who had IQs of less than 70 and were diagnosed at an early age. When the researchers replaced the normal mouse GRASP1 gene in mouse brain cells with versions carrying the two mutations, spine density decreased by 11 to 16 percent, and the long-term potentiation response disappeared.

    Huganir speculates that defects in GRASP1 might cause learning and memory problems because the cells aren’t efficiently recycling receptors back to the surface. Normally, GRASP1 attaches to traveling cellular compartments called vesicles, which carry the receptors, and somehow helps receptors get transferred from ingoing to outgoing vesicles.

    When Huganir’s team introduced GRASP1 mutations into mouse cells, receptors accumulated inside recycling compartments instead of being shuttled to the surface.

    Huganir cautions that the results don’t prove that the GRASP1 mutations caused the patients’ intellectual disability. But the study may encourage geneticists to start testing other patients for mutations in this gene, he says. If more cases are found, researchers may be able to design drugs that target the pathway. Huganir’s team is now studying GRASP1’s role in the receptor recycling process in more detail.


  10. Dopamine neurons factor ambiguity into predictions enabling us to ‘win big and win often’

    by Ashley

    From the Cold Spring Harbor Laboratory press release:

    In the struggle of life, evolution rewards animals that master their circumstances, especially when the environment changes fast. If there is a recipe for success, it is not: savor your victories when you are fortunate to have them. Rather it is: win big, and win often.

    To make winning decisions, animals cannot consult a hard-wired playbook or receive helpful advice. Instead, success depends on their ability to learn, from triumphs and from mistakes.

    Today in the journal Current Biology, a team led by Professor Adam Kepecs of Cold Spring Harbor Laboratory (CSHL) adds an important new dimension to our understanding of how the brain learns, focusing specifically on situations in which perceptually ambiguous information is used to make predictions.

    What if it’s gotten foggy and we can barely see the sign up ahead, or the announcement over the loudspeaker is garbled? In circumstances like these, how does the brain adjust the predictions that inform decisions?

    Kepecs and colleagues describe how dopamine-releasing neurons, which are understood to produce critical teaching signals for the brain, weigh the ambiguity of sensory information when they assess how successfully past experiences have guided a new decision. Their findings indicate that these neurons are even more sophisticated than previously thought.

    “Dopamine neurons are not just any neurons in the brain,” Kepecs says. “They are neurons that sit in the midbrain and send connections to large swaths of the brain. Dopamine neurons compare the predicted outcomes to actual outcomes, and send the discrepancy between these as an error feedback to other parts of the brain. This is exactly what you would want learning signals to do. And we’ve found that what dopamine neurons do is in perfect agreement with what is theoretically required for a powerful learning algorithm.”

    This is such an effective way of learning that the same strategy, called reinforcement learning, has been incorporated into many types of artificial intelligence, allowing computers to figure out how to do things without instructions from a programmer.

    Kepecs and his team added what computer scientists call a belief state (a probability distribution variable) to a mathematical model of reinforcement learning. The previously used model is good at explaining how an animal learns from experience — but only when the sensory stimuli available to the animal are straightforward, easy to interpret. “It’s been unclear whether or how dopamine neurons factor perceptual ambiguity into their teaching signal,” Kepecs says. “Our revised model generates an estimate of the probability that a given choice is correct — this is the degree of confidence about the decision.”

    “If you can evaluate your decision process — the noise and everything else that’s contributed to it — and in addition to making a binary choice assign a graded level of confidence to the decision, it’s essentially a prediction about your accuracy,” he says. “It’s a completely different computation than just using the cues to recall past experiences to make predictions.”
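    The revised model can be sketched in a few lines (a simplified illustration with hypothetical names, not the authors’ code): decision confidence sets the expected value of a choice, and the dopamine-like teaching signal is the outcome minus that expectation.

    ```python
    def teaching_signal(confidence, rewarded, reward_size=1.0):
        """Confidence-weighted reward prediction error: the expected value
        of a choice is scaled by the estimated probability that the choice
        is correct, so a reward after an ambiguous (low-confidence)
        decision produces a larger teaching signal than the same reward
        after a clear one."""
        expected = confidence * reward_size
        outcome = reward_size if rewarded else 0.0
        return outcome - expected

    # Clear stimulus, high confidence: the reward is largely expected.
    print(teaching_signal(confidence=0.9, rewarded=True))   # ~0.1
    # Ambiguous stimulus, confidence near chance: the same reward is
    # more surprising, so the error signal is larger.
    print(teaching_signal(confidence=0.5, rewarded=True))   # 0.5
    # An unrewarded high-confidence choice yields a strong negative signal.
    print(teaching_signal(confidence=0.9, rewarded=False))  # -0.9
    ```

    In the classic model, by contrast, the expectation depends only on past reward history, not on how ambiguous the current stimulus is.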

    Armin Lak, a British Iranian postdoctoral researcher working with Kepecs, developed a computer model that predicted how neurons would behave if they were performing this calculation. Then they compared those predictions to the activity of dopamine neurons in the brains of monkeys while they performed a task in which they made a decision informed by visual cues that were sometimes clear, and sometimes not.

    The activity of the neurons was recorded by Kepecs’ collaborators Kensaku Nomoto and Masamichi Sakagami, neuroscientists at Tamagawa University in Tokyo, who were studying how the brain uses sensory information to make decisions. In the task, monkeys watched a screen filled with moving dots, and had to decide whether more dots were moving to the left or to the right, receiving a juice reward when they got the answer right.

    When Kepecs and Lak analyzed the data, they found that the activity of the dopamine neurons closely fit the predictions of their new model — indicating that these neurons were likely evaluating the reliability of the information used during each decision. Applying the computational model was like looking at the data through a new microscope, Kepecs says. “By looking at it this way, you can see things that you couldn’t see before in the data. Suddenly it just made sense. This system is much smarter than we thought before.”

    Surprisingly, Kepecs says, the confidence-signaling neurons became active a fraction of a second before the monkeys made a decision. This early activity suggests the neurons may influence the immediate decision-making process — a role that the team will explore in future research.