Monthly Archives: June 2016

Security warnings get ignored because they come at bad times

A new study from BYU, in collaboration with Google Chrome engineers, finds that the status quo of warning messages appearing haphazardly — while people are typing, watching a video, uploading files, etc. — results in up to 90 percent of users disregarding them.

Researchers found these moments are ineffective because of “dual task interference,” a neural limitation whereby even simple tasks can’t be performed simultaneously without significant performance loss. Or, in human terms, multitasking.

“We found that the brain can’t handle multitasking very well,” said study coauthor and BYU information systems professor Anthony Vance. “Software developers categorically present these messages without any regard to what the user is doing. They interrupt us constantly and our research shows there’s a high penalty that comes by presenting these messages at random times.”

For example, 74 percent of people in the study ignored security messages that popped up while they were on the way to close a web page window. Another 79 percent ignored the messages if they were watching a video. And a whopping 87 percent disregarded the messages while they were transferring information, in this case, a confirmation code.

“But you can mitigate this problem simply by finessing the timing of the warnings,” said Jeff Jenkins, lead author of the study appearing in Information Systems Research, one of the premier journals of business research. “Waiting to display a warning until people are not busy doing something else increases their security behavior substantially.”

For example, Jenkins, Vance and BYU colleagues Bonnie Anderson and Brock Kirwan found that people pay the most attention to security messages when they pop up at times of lower dual-task interference, such as:

  • After watching a video
  • Waiting for a page to load
  • After interacting with a website
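As a concrete sketch of the timing idea above, the snippet below defers non-critical warnings until the user has been idle for a moment. Everything here (the class, method names, and the idle heuristic) is hypothetical, invented for illustration; it is not taken from the study or from Chrome:

```python
import time
from collections import deque

class WarningScheduler:
    """Toy scheduler: defer non-critical security warnings until the
    user is idle, rather than interrupting mid-task. Hypothetical
    illustration only, not the Chrome implementation."""

    def __init__(self, idle_threshold=2.0):
        self.idle_threshold = idle_threshold   # seconds without input = "idle"
        self.last_activity = time.monotonic()
        self.pending = deque()

    def record_activity(self):
        """Call on every keystroke, click, or scroll."""
        self.last_activity = time.monotonic()

    def post_warning(self, message, critical=False):
        """Critical warnings show immediately; others wait for an idle moment."""
        if critical:
            return [message]                   # display right away
        self.pending.append(message)
        return self.flush_if_idle()

    def flush_if_idle(self):
        """Return all queued warnings once the user has been idle long enough."""
        if time.monotonic() - self.last_activity >= self.idle_threshold:
            shown, self.pending = list(self.pending), deque()
            return shown
        return []
```

In a real browser the "idle" signal would come from events like a page finishing loading or a video ending, matching the low-interference moments the researchers identified.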

The authors concede this may all seem like common sense, but timing security warnings to appear when a person is more likely ready to respond isn’t current practice in the software industry. Further, they’re the first to show empirically the effects of dual task interference during computer security tasks. In addition to showing what this multitasking does to user behavior, the researchers also showed what it does to the brain.

For part of the study, researchers had participants complete computer tasks while an fMRI scanner measured their brain activity. The experiment showed neural activity was substantially reduced when security messages interrupted a task, as compared to when a user responded to the security message itself.

The BYU researchers used the functional MRI data as they collaborated with a team of Google Chrome security engineers to identify better times to display security messages during the browsing experience.

Computing boosts energy efficiency

Energy consumption is one of the key challenges of modern computing, whether for wireless embedded client devices or high-performance computing centers. The ability to develop energy-efficient software is crucial, as the use of data and data processing keeps increasing in all areas of society. The need for power-efficient computing is not only about environmental impact: we need it simply to deliver on the trends already predicted.

The EU-funded Excess project, which finishes August 31, set out three years ago to address what the researchers perceived as a lack of holistic, integrated approaches covering all system layers from hardware to user-level software, and the limits this placed on exploiting existing solutions and their energy efficiency. The researchers first analyzed where energy and performance are wasted, and on that basis developed a framework that should allow rapid production of energy-efficient software.

“When we started this research program there was a clear lack of tools and mathematical models to help software engineers program in an energy efficient way, and also to reason abstractly about the power and energy behavior of their software,” says Philippas Tsigas, professor in Computer Engineering at Chalmers University of Technology, and project leader of Excess. “The holistic approach of the project involves both hardware and software components together, enabling the programmer to make power-aware architectural decisions early. This allows for larger energy savings than previous approaches, where software power optimization was often applied as a secondary step, after the initial application was written.”

The Excess project has taken major steps toward providing software developers and system designers with a set of tools and models that allow them to program in an energy-efficient way. The toolbox spans from fundamentally new energy-saving hardware components, such as the Movidius Myriad platform, to sophisticated efficient libraries and algorithms.

Tests run on large data streaming aggregations, a common operation in real-time data analytics, have shown impressive results. Using the Excess framework, a programmer can produce a solution that is 54 times more energy efficient than a standard implementation on a high-end PC processor. The holistic Excess approach first exploits the hardware benefits of an embedded processor, then shows how best to split the computations inside the processor to enhance performance even further.
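For readers unfamiliar with the workload, a data streaming aggregation groups a continuous stream of records into windows and combines the values in each. The minimal plain-Python sketch below shows the shape of the computation only; it is hypothetical and carries none of the hardware mapping that gives Excess its energy savings:

```python
from collections import defaultdict

def tumbling_window_sum(stream, window_size):
    """Aggregate (timestamp, key, value) tuples into per-window, per-key sums.

    A toy version of the streaming-aggregation operator used in
    real-time analytics; the Excess framework maps operators like
    this onto embedded hardware, which this sketch does not."""
    windows = defaultdict(float)          # (window_id, key) -> running sum
    for ts, key, value in stream:
        window_id = ts // window_size     # tumbling windows of fixed width
        windows[(window_id, key)] += value
    return dict(windows)
```

The energy question is where each part of this loop runs: the project's holistic approach lets the programmer decide early, for instance, how to split the per-record work across the cores of an embedded processor.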

Designing a chip that checks for sabotage

Siddharth Garg, an assistant professor of electrical and computer engineering at the NYU Tandon School of Engineering, and fellow researchers are developing a unique solution: a chip with both an embedded module that proves that its calculations are correct and an external module that validates the first module’s proofs.

While software viruses are easy to spot and fix with downloadable patches, deliberately inserted hardware defects are invisible and act surreptitiously. For example, a secretly inserted “back door” function could allow attackers to alter or take over a device or system at a specific time. Garg’s configuration, an example of an approach called “verifiable computing” (VC), keeps tabs on a chip’s performance and can spot telltale signs of Trojans.

The ability to verify has become vital in an electronics age without trust: Gone are the days when a company could design, prototype, and manufacture its own chips. Manufacturing costs are now so high that designs are sent to offshore foundries, where security cannot always be assured.

But under the system proposed by Garg and his colleagues, the verifying processor can be fabricated separately from the chip. “Employing an external verification unit made by a trusted fabricator means that I can go to an untrusted foundry to produce a chip that has not only the circuitry performing computations, but also a module that presents proofs of correctness,” said Garg.

The chip designer then turns to a trusted foundry to build a separate, less complex module: an ASIC (application-specific integrated circuit), whose sole job is to validate the proofs of correctness generated by the internal module of the untrusted chip.

Garg said that this arrangement provides a safety net for the chip maker and the end user. “Under the current system, I can get a chip back from a foundry with an embedded Trojan. It might not show up during post-fabrication testing, so I’ll send it to the customer,” said Garg. “But two years down the line it could begin misbehaving. The nice thing about our solution is that I don’t have to trust the chip because every time I give it a new input, it produces the output and the proofs of correctness, and the external module lets me continuously validate those proofs.”

An added advantage is that the chip built by the external foundry is smaller, faster, and more power-efficient than the trusted ASIC, sometimes by orders of magnitude. The VC setup can therefore potentially reduce the time, energy, and chip area needed to generate proofs.

“For certain types of computations, it can even outperform the alternative: performing the computation directly on a trusted chip,” Garg said.

The researchers next plan to investigate techniques to reduce both the overhead that generating and verifying proofs imposes on a system and the bandwidth required between the prover and verifier chips. “And because with hardware, the proof is always in the pudding, we plan to prototype our ideas with real silicon chips,” said Garg.