Monthly Archives: August 2016

Thanks to new software

That may be changing. Starting next month, University at Buffalo researchers will begin testing, in the U.S., Europe, Australia and Latin America, a new software tool they developed that could make assessing brain atrophy part of the clinical routine for MS patients. The research is funded by Novartis as part of its commitment to advancing care for people with MS through effective treatments and tools for assessing disease activity.

According to the UB researchers, being able to routinely measure how much brain atrophy has occurred would help physicians better predict how a patient’s disease will progress. It could also provide physicians with more information about how well MS treatments are working in individual patients. These and other benefits were outlined in a recent review study the researchers published in Expert Review of Neurotherapeutics.

“Measuring brain atrophy on an annual basis will allow clinicians to identify which of their patients is at highest risk for physical and cognitive decline,” said Robert Zivadinov, MD, PhD, professor of neurology and director of the Buffalo Neuroimaging Analysis Center in the Jacobs School of Medicine and Biomedical Sciences at UB. Over the past 10 years, he and his colleagues at UB, among the world’s most prolific groups studying brain atrophy and MS, developed the world’s largest database of magnetic resonance images of individuals with MS, consisting of 20,000 brain scans with data from about 4,000 MS patients. The new tool, Neurological Software Tool for Reliable Atrophy Measurement in MS, or NeuroSTREAM, simplifies the calculation of brain atrophy based on data from routine magnetic resonance images and compares it with other scans of MS patients in the database.

More than lesions

Without measuring brain atrophy, clinicians cannot obtain a complete picture of how a patient’s disease is progressing, Zivadinov said.

“MS patients experience, on average, about three to four times more annual brain volume loss than a healthy person,” he said. “But a clinician can’t tell a patient, ‘You have lost this amount of brain volume since your last visit.'”

Instead, clinicians rely primarily on the presence of brain lesions to determine how MS is progressing. “Physicians and radiologists can easily count the number of new lesions on an MRI scan,” said Zivadinov, “but lesions are only part of the story related to development of disability in MS patients.”

And even though MS drugs can stop lesions from forming, in many cases brain atrophy and the cognitive and physical decline it causes will continue, the researchers say.

“While the MS field has to continue working on solving challenges related to brain atrophy measurement on individual patient level, its assessment has to be incorporated into treatment monitoring, because in addition to assessment of lesions, it provides an important additional value in determining or explaining the effect of disease-modifying drugs,” Zivadinov and co-authors wrote in a June 23 editorial that was part of a series of commentaries in Multiple Sclerosis Journal addressing the pros and cons of using brain atrophy to guide therapy monitoring in MS.

Soon, the UB researchers will begin gathering data to create a database of brain volume changes in more than 1,000 patients from 30 MS centers in the U.S. and around the world. The objective is to determine if NeuroSTREAM can accurately quantify brain volume changes in MS patients.

The software runs on a user-friendly, cloud-based platform that complies with health privacy regulations such as HIPAA, and it is easily accessible from workstations, laptops, tablets and smartphones. The ultimate goal is to develop a user-friendly website to which clinicians can upload anonymized scans of patients and receive real-time feedback on what the scans reveal.

NeuroSTREAM measures brain atrophy indirectly, by tracking the lateral ventricular volume (LVV), the volume of the lateral ventricles, fluid-filled structures that contain cerebrospinal fluid. When brain tissue is lost to atrophy, the ventricles expand and the LVV increases.
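As a rough illustration of the underlying arithmetic (this is not NeuroSTREAM's actual algorithm, and the volumes are invented), ventricular enlargement between two scans can be expressed as a percent change:

```python
def lvv_percent_change(lvv_baseline_ml, lvv_followup_ml):
    """Percent change in lateral ventricular volume between two scans.

    A positive value means the ventricles have expanded, which serves
    as an indirect proxy for brain tissue loss (atrophy).
    """
    return 100.0 * (lvv_followup_ml - lvv_baseline_ml) / lvv_baseline_ml

# Hypothetical patient whose ventricles grew from 30 ml to 33 ml
# between annual scans: a 10 percent enlargement
change = lvv_percent_change(30.0, 33.0)
```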

Canary in the coal mine

“The ventricles are a surrogate measure of brain atrophy,” said Michael G. Dwyer III, PhD, assistant professor in the Department of Neurology and the Department of Bioinformatics in the Jacobs School of Medicine and Biomedical Sciences at UB. “They’re the canary in the coal mine.”

Dwyer, a computer scientist and director of technical imaging at the Buffalo Neuroimaging Analysis Center, is principal investigator on the NeuroSTREAM software development project. At the American Academy of Neurology meeting in April, he reported preliminary results showing that NeuroSTREAM provided a feasible, accurate, reliable and clinically relevant method of measuring brain atrophy in MS patients, using LVV.

“Usually, you need high-resolution, research-quality brain scans to do this,” Dwyer explained, “but our software is designed to work with low-resolution scans, the type produced by the MRI machines normally found in clinical practice.”

To successfully measure brain atrophy in a way that’s meaningful for treatment, Zivadinov explained, what’s needed is a normative database through which individual patients can be compared to the population of MS patients. “NeuroSTREAM provides context, because it compares a patient’s brain not just to the general population but to other MS patients,” said Dwyer.
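The kind of normative comparison described here can be sketched with made-up numbers (neither the real UB database nor its statistics are used; the cohort values below are invented for illustration):

```python
import statistics

def lvv_change_z_score(patient_change_pct, cohort_changes_pct):
    """Compare a patient's annual LVV change to a reference MS cohort.

    Returns a z-score: how many standard deviations the patient's
    ventricular enlargement lies above (positive) or below (negative)
    the cohort mean.
    """
    mean = statistics.mean(cohort_changes_pct)
    sd = statistics.stdev(cohort_changes_pct)
    return (patient_change_pct - mean) / sd

# Hypothetical cohort of annual LVV changes (percent) and one patient
cohort = [1.8, 2.5, 3.1, 2.0, 2.6, 3.4, 2.2, 2.9]
z = lvv_change_z_score(5.0, cohort)  # well above the cohort mean
```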

Computer bug finder

Researchers at the New York University Tandon School of Engineering, in collaboration with the MIT Lincoln Laboratory and Northeastern University, are taking an unorthodox approach to tackling the problem of software vulnerabilities: Instead of finding and remediating bugs, they’re adding them by the hundreds of thousands.

Brendan Dolan-Gavitt, an assistant professor of computer science and engineering at NYU Tandon, is a co-creator of LAVA, or Large-Scale Automated Vulnerability Addition, a technique for intentionally adding vulnerabilities to a program’s source code to test the limits of bug-finding tools and ultimately help developers improve them. In experiments using LAVA, the researchers showed that many popular bug finders detect merely 2 percent of the vulnerabilities it injects.

A paper detailing the research was presented at the IEEE Symposium on Security and Privacy and was published in the conference proceedings. Technical staff members of the MIT Lincoln Laboratory led the technical research: Patrick Hulin, Tim Leek, Frederick Ulrich, and Ryan Whelan. Collaborators from Northeastern University are Engin Kirda, professor of computer and information science; Wil Robertson, assistant professor of computer and information science; and doctoral student Andrea Mambretti.

Dolan-Gavitt explained that the efficacy of bug-finding programs is based on two metrics: the false positive rate and the false negative rate, both of which are notoriously difficult to calculate. It is not unusual for a program to detect a bug that later proves not to be there — a false positive — and to miss vulnerabilities that are actually present — a false negative. Without knowing the total number of real bugs, there is no way to gauge how well these tools perform.
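These two metrics become computable once the ground truth is known, which is exactly what injecting bugs provides. A minimal sketch of the arithmetic, with invented bug identifiers and counts:

```python
def evaluate_bug_finder(reported, ground_truth):
    """Compute detection and error rates for a bug-finding tool.

    reported:     set of bug IDs the tool flagged
    ground_truth: set of bug IDs actually injected into the program
    """
    true_positives = reported & ground_truth
    false_positives = reported - ground_truth   # flagged, but not real
    false_negatives = ground_truth - reported   # real, but missed
    return {
        "detection_rate": len(true_positives) / len(ground_truth),
        "false_positive_rate": len(false_positives) / len(reported),
        "false_negative_rate": len(false_negatives) / len(ground_truth),
    }

# 100 injected bugs; the tool flags 5 findings, of which only 2 are real
injected = {f"bug{i}" for i in range(100)}
flagged = {"bug3", "bug42", "ghost1", "ghost2", "ghost3"}
scores = evaluate_bug_finder(flagged, injected)
```

With known injected bugs, the false-negative rate is exact rather than estimated, which is the evaluation gap LAVA is meant to close.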

“The only way to evaluate a bug finder is to control the number of bugs in a program, which is exactly what we do with LAVA,” said Dolan-Gavitt. The automated system inserts known quantities of novel vulnerabilities that are synthetic yet possess many of the same attributes as computer bugs in the wild. Dolan-Gavitt and his colleagues dodged the typical five-figure price tag for manual, custom-designed vulnerabilities and instead created an automated system that makes judicious edits in real programs’ source code.

The result: hundreds of thousands of unstudied, highly realistic vulnerabilities that are inexpensive, span the execution lifetime of a program, are embedded in normal control and data flow, and manifest only for a small fraction of inputs, so they do not crash the program on every run. The researchers had to create novel bugs, and in significant numbers, in order to have a large enough body to study the strengths and shortcomings of bug-finding software. Previously identified vulnerabilities would easily trip current bug finders, skewing the results.
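To illustrate the style of bug described, here is a toy example of my own (LAVA actually targets C programs and memory-safety bugs; Python's IndexError stands in for those): a flaw wired into normal data flow that misbehaves only when one rare input value reaches it.

```python
def parse_record(data: bytes) -> int:
    """Toy parser with a deliberately injected, dormant bug.

    A length field flows through normal control and data flow, but one
    rare 'magic' value bypasses the bounds check, so the bug manifests
    only for a tiny fraction of inputs.
    """
    length = int.from_bytes(data[:2], "big")
    payload = data[2:]
    if length != 0x6C61:                    # injected trigger value
        length = min(length, len(payload))  # normal, safe path
    # Out-of-bounds read on the trigger path; on every other input the
    # function computes an ordinary checksum of the payload prefix.
    return sum(payload[i] for i in range(length))

# Almost every input takes the safe path and behaves normally
checksum = parse_record(b"\x00\x03hello")  # sums b"hel"
```

A bug finder that only exercises common inputs never sees the trigger path, which is why realistic dormant bugs make a demanding benchmark.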

The team tested existing bug-finding software and found that just 2 percent of bugs created by LAVA were detected. Dolan-Gavitt explained that automated bug identification is an extremely complex task that developers are constantly improving. The researchers will share their results to assist these efforts.

Additionally, the team is planning to launch an open competition this summer to allow developers and other researchers to request a LAVA-bugged version of a piece of software, attempt to find the bugs, and receive a score based on their accuracy.

“There has never been a performance benchmark at this scale in this area, and now we have one,” Dolan-Gavitt said. “Developers can compete for bragging rights on who has the highest success rate in bug-finding, and the programs that will come out of the process could be stronger.”

Can’t afford to ignore the human element of IT

“Organizations focus too much on the technical and mechanical aspects of IT errors, rather than the human and environmental aspects of the errors,” said Sumantra Sarkar, assistant professor of information systems in the School of Management. “Our study suggests the mood and personality traits of the software development team affect how they report on self-committed errors in IT projects. A minor glitch in design or programming can have devastating consequences. For example, even a small error in software design could result in a NASA capsule disaster in outer space.”

“The roles of mood and conscientiousness in reporting of self-committed errors on IT projects,” published in the Information Systems Journal, examines how human elements influence IT errors and decision-making. The research also establishes a theoretical framework intended to explain some of the decision-making processes associated with reporting self-committed errors.

Since the research suggests IT errors are caused by a combination of factors, the researchers said that it is important to adopt various procedures to identify inefficiencies, ineffective care and preventable errors to make improvements associated with the IT systems. And, it is important to look at individuals working on information technology teams.

According to the paper, current research on IT error reporting mainly explores the issues related to resources and technology, such as budget shortages, hardware malfunctions or labor shortages.

“We found a difference in the self-committed IT error reporting process of developers depending on if they were in a positive or negative mood,” Sarkar said. “When IT workers were in a positive mood, they were less likely to report on self-committed errors. This can be explained by how being in a positively elevated state can impede one’s cognitive processing.”

The study has managerial implications, too.

“Practitioners often perceive software development as dependent on machines, as opposed to humans, which is not a sustainable mindset,” Sarkar said. “Managers should establish a good rapport with team members to foster an environment that will allow employees to speak up when they feel their mood could affect their reporting decisions.”

The paper also states IT managers should emphasize to their employees the benefits of reporting self-committed errors because, ultimately, IT errors that go unreported could hurt the company more in the long run.

Sarkar said employees should be cognizant of how their mood could impact their reporting decisions. “Before IT workers make decisions regarding self-committed errors, they should assess their mood and determine if they should wait until they are in a more neutral state to make reporting decisions,” he said.

The paper also looks at how the personality trait conscientiousness can influence error-reporting decisions.

“We identified conscientiousness as being one of the most important personality traits related to IT error-reporting decisions. Conscientious workers have a strong sense of duty and selflessness and are more willing to report self-committed errors,” Sarkar said. “Managers should be aware that conscientious team members are less susceptible to the influences of mood on decision making.”