NEW SMARTPHONE APP CAN DETECT NEWBORN JAUNDICE IN MINUTES

From the FMS Global News Desk of Jeanne Hambleton. Released: 27-Aug-2014. Source: University of Washington. By Michelle Ma. Citation: Association for Computing Machinery’s International Joint Conference on Pervasive and Ubiquitous Computing.

Newswise — Newborn jaundice: It is one of the last things a parent wants to deal with, but it is unfortunately a common condition in babies less than a week old.

Skin that turns yellow can be a sure sign that a newborn is jaundiced and is not adequately eliminating the chemical bilirubin. But that discoloration is sometimes hard to see, and severe jaundice left untreated can harm a baby.

University of Washington engineers and physicians have developed a smartphone application that checks for jaundice in newborns and can deliver results to parents and pediatricians within minutes. It could serve as a screening tool to determine whether a baby needs a blood test – the gold standard for detecting high levels of bilirubin.

“Virtually every baby gets jaundiced, and we’re sending them home from the hospital even before bilirubin levels reach their peak,” said James Taylor, a UW professor of pediatrics and medical director of the newborn nursery at UW Medical Center.

“This smartphone test is really for babies in the first few days after they go home. A parent or health care provider can get an accurate picture of bilirubin to bridge the gap after leaving the hospital.”

The research team will present its results at the Association for Computing Machinery’s International Joint Conference on Pervasive and Ubiquitous Computing in September in Seattle.

The app, called BiliCam, uses a smartphone’s camera and flash and a color calibration card the size of a business card. A parent or health care professional would download the app, place the card on her baby’s belly, then take a picture with the card in view. The card calibrates and accounts for different lighting conditions and skin tones. Data from the photo are sent to the cloud and are analyzed by machine-learning algorithms, and a report on the newborn’s bilirubin levels is sent almost instantly to the parent’s phone.
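The release does not describe BiliCam’s actual algorithms, but the core calibration idea can be sketched: known reference colors on the card reveal the lighting’s color cast, letting software correct a skin measurement before estimating yellowness. Everything below is invented for illustration (the patch colors, the linear correction model, and the crude “yellowness” proxy); it is not the team’s method.

```python
import numpy as np

# Reference (true) RGB values of three card patches, on a 0-1 scale (assumed).
reference = np.array([
    [1.0, 1.0, 1.0],   # white patch
    [0.2, 0.2, 0.6],   # blue patch
    [0.9, 0.8, 0.2],   # yellow patch
])

# The same patches as the phone camera might capture them under warm
# lighting: here, a simulated per-channel color cast.
cast = np.array([1.1, 1.0, 0.8])
captured = reference * cast

# Fit a linear correction M so that captured @ M ~= reference.
M, *_ = np.linalg.lstsq(captured, reference, rcond=None)

def correct(rgb):
    """Apply the fitted color correction to a measured RGB value."""
    return np.asarray(rgb) @ M

# A skin-region pixel measured under the same (simulated) lighting.
skin_actual = np.array([0.95, 0.75, 0.45])
skin_true = correct(skin_actual * cast)

# A crude yellowness proxy: blue-channel deficit vs. the red/green mean.
yellowness = (skin_true[0] + skin_true[1]) / 2 - skin_true[2]
print(skin_true.round(3), round(yellowness, 3))
```

Because the correction is fitted from the card patches alone, the same photo works under different lighting, which is presumably why the card matters more than the camera.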

“This is a way to provide peace of mind for the parents of newborns,” said Shwetak Patel, a UW associate professor of computer science and engineering and of electrical engineering.

“The advantage of doing the analysis in the cloud is that our algorithms can be improved over time.”

A noninvasive jaundice screening tool is available in some hospitals and clinics, but the instrument costs several thousand dollars and isn’t feasible for home use. Currently, both doctors and parents assess jaundice by looking for the yellow color in a newborn’s skin, but this visual assessment is only moderately accurate.

The UW team developed BiliCam to be easy to use and affordable for both clinicians and parents, especially during the first several days after birth when it is crucial to check for jaundice.

Jaundice, or the yellowing of the skin, can happen when an excess amount of bilirubin collects in the blood. Bilirubin is a natural byproduct of the breakdown of red blood cells, which the liver usually metabolizes. But newborns often metabolize bilirubin slower because their livers are not yet fully functioning. If left untreated, severe jaundice can cause brain damage and a potentially fatal condition called kernicterus.

The UW team ran a clinical study with 100 newborns and their families at UW Medical Center. They used a blood test, the current screening tool used in hospitals, and BiliCam to test the babies when they were between two and five days old. They found that BiliCam performed as well as or better than the current screening tool. Though it would not replace a blood test, BiliCam could let parents know if they should take that next step.

“BiliCam would be a significantly cheaper and more accessible option than the existing reliable screening methods,” said Lilian de Greef, lead author and a UW doctoral student in computer science and engineering.

“Lowering the access barrier to medical applications can have profound effects on patients, their caregivers and their doctors, especially for something as prevalent as newborn jaundice.”

The researchers plan to test BiliCam on up to 1,000 additional newborns, especially those with darker skin pigments, to make the algorithms robust enough to account for all ethnicities and skin tones. This could make BiliCam a useful tool for parents and health care workers in developing countries, where jaundice accounts for many newborn deaths.

“We’re really excited about the potential of this in resource-poor areas, something that can make a difference in places where there are not tools to measure bilirubin but there is good infrastructure for mobile phones,” Taylor said.

Within a year, the researchers say, BiliCam could be used by doctors as an alternative to the current screening procedures for bilirubin. They have filed patents on the technology and hope within a couple of years to have Food and Drug Administration approval for a version of the BiliCam app that parents can use at home on their smartphones.

Other members of the research team are Mayank Goel and Min Joon Seo, UW doctoral students in computer science and engineering; Eric Larson of Southern Methodist University; and James Stout of the UW pediatrics department.

This research is funded by the Coulter Foundation and a National Science Foundation Graduate Research Fellowship. For more information, visit the BiliCam website: http://www.bilicam.com/

 

BABBLING BABIES – RESPONDING TO ONE-ON-ONE ‘BABY TALK’ – MASTER MORE WORDS

From the FMS Global News Desk of Jeanne Hambleton. Source: University of Washington. By Molly McElroy, UW Today.

Common advice to new parents is that the more words babies hear the faster their vocabulary grows. Now new findings show that what spurs early language development is not so much the quantity of words as the style of speech and social context in which speech occurs.

Researchers at the University of Washington and University of Connecticut examined thousands of 30-second snippets of verbal exchanges between parents and babies. They measured parents’ use of a regular speaking voice versus an exaggerated, animated baby talk style, and whether speech occurred one-on-one between parent and child or in group settings.

“What our analysis shows is that the prevalence of baby talk in one-on-one conversations with children is linked to better language development, both concurrent and future,” said Patricia Kuhl, co-author and co-director of UW’s Institute for Learning & Brain Sciences.

The more parents exaggerated vowels – for example “How are youuuuu?” – and raised the pitch of their voices, the more the 1-year-olds babbled, which is a forerunner of word production. Baby talk was most effective when a parent spoke with a child individually, without other adults or children around.

“The fact that the infant’s babbling itself plays a role in future language development shows how important the interchange between parent and child is,” Kuhl said.

The findings will be published in an upcoming issue of the journal Developmental Science.

Twenty-six babies about 1 year of age wore vests containing audio recorders that collected sounds from the children’s auditory environment for eight hours a day for four days. The researchers used LENA (“language environment analysis”) software to examine 4,075 30-second intervals of recorded speech. Within those segments, the researchers identified who was talking in each segment, how many people were there, whether baby talk – also known as “parentese” – or regular voice was used, and other variables.
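The kind of tallying described above can be illustrated with a toy sketch: each labeled 30-second segment is filtered for the condition of interest (baby talk in a one-on-one setting) and counted per family. The field names and records here are invented; this is not the LENA pipeline.

```python
from collections import Counter

# Invented stand-ins for labeled 30-second segments (the study had 4,075).
segments = [
    {"family": "A", "style": "parentese", "setting": "one-on-one"},
    {"family": "A", "style": "regular",   "setting": "group"},
    {"family": "A", "style": "parentese", "setting": "one-on-one"},
    {"family": "B", "style": "parentese", "setting": "group"},
    {"family": "B", "style": "regular",   "setting": "one-on-one"},
]

# Count, per family, the segments that were both baby talk AND one-on-one.
exposure = Counter(
    s["family"]
    for s in segments
    if s["style"] == "parentese" and s["setting"] == "one-on-one"
)
print(exposure)
```

A per-family exposure score like this is what would then be related to the vocabulary sizes reported at age 2.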

When the babies were 2 years old, parents filled out a questionnaire measuring how many words their children knew. Infants who had heard more baby talk knew more words. In the study, 2-year-olds in families who spoke the most baby talk in a one-on-one social context knew 433 words, on average, compared with the 169 words recognized by 2-year-olds in families who used the least baby talk in one-on-one situations.

The relationship between baby talk and language development held across socioeconomic status, even though only 26 families took part in the study.

“Some parents produce baby talk naturally and they do not realize they are benefiting their children,” said first author Nairán Ramírez-Esparza, an assistant psychology professor at the University of Connecticut.

“Some families are more quiet, not talking all the time. But it helps to make an effort to talk more.”

Previous studies have focused on the amount of language babies hear, without considering the social context. The new study shows that quality, not quantity, is what matters.

“What this study is adding is that how you talk to children matters. Parentese is much better at developing language than regular speech, and even better if it occurs in a one-on-one interaction,” Ramirez-Esparza said.

Parents can use baby talk when going about everyday activities, saying things like, “Where are your shoooes?,” “Let’s change your diiiiaper,” and “Oh, this tastes goooood!,” emphasizing important words and speaking slowly using a happy tone of voice.

“It is not just talk, talk, talk at the child,” said Kuhl. “It is more important to work toward interaction and engagement around language. You want to engage the infant and get the baby to babble back. The more you get that serve and volley going, the more language advances.”

A National Science Foundation Science of Learning Program to the UW-hosted LIFE Center funded the study.

 

MONTHS BEFORE THEIR FIRST WORDS, BABIES’ BRAINS REHEARSE SPEECH MECHANICS

From the FMS Global News Desk of Jeanne Hambleton. Source: University of Washington. By Molly McElroy, UW Today.

Photo: Institute for Learning & Brain Sciences, UW. A year-old baby sits in a brain scanner, called magnetoencephalography, a noninvasive approach to measuring brain activity. The baby listens to speech sounds like “da” and “ta” played over headphones while researchers record her brain responses.

Infants can tell the difference between sounds of all languages until about 8 months of age when their brains start to focus only on the sounds they hear around them. It has been unclear how this transition occurs, but social interactions and caregivers’ use of exaggerated “parentese” style of speech seem to help.

University of Washington research in 7- and 11-month-old infants shows that speech sounds stimulate areas of the brain that coordinate and plan motor movements for speech.

The study, published July 14 in the Proceedings of the National Academy of Sciences, suggests that baby brains start laying down the groundwork of how to form words long before they actually begin to speak, and this may affect the developmental transition.

“Most babies babble by 7 months, but do not utter their first words until after their first birthdays,” said lead author Patricia Kuhl, who is the co-director of the UW’s Institute for Learning and Brain Sciences.

“Finding activation in motor areas of the brain when infants are simply listening is significant, because it means the baby brain is engaged in trying to talk back right from the start and suggests that 7-month-olds’ brains are already trying to figure out how to make the right movements that will produce words.”

Kuhl and her research team believe this practice at motor planning contributes to the transition when infants become more sensitive to their native language.

The results emphasize the importance of talking to kids during social interactions even if they are not talking back yet.

“Hearing us talk exercises the action areas of infants’ brains, going beyond what we thought happens when we talk to them,” Kuhl said. “Infants’ brains are preparing them to act on the world by practicing how to speak before they actually say a word.”

In the experiment, infants sat in a brain scanner that measures brain activation through a noninvasive technique called magnetoencephalography. Nicknamed MEG, the brain scanner resembles an egg-shaped vintage hair dryer and is completely safe for infants. The Institute for Learning and Brain Sciences was the first in the world to use such a tool to study babies while they engaged in a task.

The 57 babies, 7-month-olds and 11- to 12-month-olds, each listened to a series of native and foreign-language syllables such as “da” and “ta” as researchers recorded their brain responses.

The researchers observed brain activity in an auditory area of the brain called the superior temporal gyrus, as well as in Broca’s area and the cerebellum, cortical regions responsible for planning the motor movements required for producing speech.

This pattern of brain activation occurred for sounds in the 7-month-olds’ native language (English) as well as in a non-native language (Spanish), showing that at this early age infants are responding to all speech sounds, whether or not they have heard the sounds before.

In the older infants, brain activation was different. By 11-12 months, infants’ brains increase motor activation to the non-native speech sounds relative to native speech, which the researchers interpret as showing that it takes more effort for the baby brain to predict which movements create non-native speech. This reflects an effect of experience between 7 and 11 months, and suggests that activation in motor brain areas is contributing to the transition in early speech perception.

The study has social implications, suggesting that the slow and exaggerated parentese speech – “Hiiiii! How are youuuuu?” – may actually prompt infants to try to synthesize utterances themselves and imitate what they heard, uttering something like “Ahhh bah bah baaah.”

“Parentese is very exaggerated, and when infants hear it, their brains may find it easier to model the motor movements necessary to speak,” Kuhl said.

Co-authors Rey Ramirez, Alexis Bosseler, Jo-Fu Lotus Lin and Toshiaki Imada are all with UW’s Institute for Learning & Brain Sciences.

The National Science Foundation Science of Learning Program grant to the UW’s LIFE Center funded the study.

See you tomorrow. Jeanne

 

 
