Category Archives: Computer and Technology

Safeguarding Against Identity Theft

Every technology has its limitations. Each brings advantages and disadvantages, so it is important to be mindful of the drawbacks that come with using it.

Credit card use, online transactions and other money transfers over the web can sometimes wreak havoc in your life. Criminals are no longer constrained by physical boundaries.

They are also using the latest technical methods to steal money over the internet. Identity fraud is one common example, in which credit card details, passwords and other confidential data are compromised.

One online survey found that a large share of respondents had suffered identity theft and lost substantial sums of money because of it. On average, people lose around $500 per incident on legal settlements and related costs.

This kind of theft is increasing at an alarming rate, and cybercriminals are stepping up their activity as more and more payments are processed over the internet. If you want to avoid this pitfall, checking identity guard reviews is one way to find protection.

A data breach can happen at any time and without warning, so you are advised to take preventive measures. Criminals often change a customer's contact information and then exploit the account, so review your contact details and other account data regularly.

Never give your PIN or password to unknown callers from suspicious numbers. Fraudulent callers pose as bank staff to coax your details out of you. Stay alert to these fake calls and report them directly to your bank.

Take advantage of any free monitoring service your card company or bank provides; many people simply do not know such services exist. Talk with your bank, keep your account password and other confidential data to yourself, and use your account's monitoring features to catch misuse of your details early.

Activating a fraud alert can also help in the event of a security breach: criminals will find it harder to take over your information and use it immediately, because you will be notified first. It is your responsibility to stay alert and use the technology so that you reap its benefits.

Thanks to new software

That may be changing. Starting next month, University at Buffalo researchers will begin testing, in the U.S., Europe, Australia and Latin America, a new software tool they developed that could make assessing brain atrophy part of the clinical routine for MS patients. The research is funded by Novartis, as part of its commitment to advance the care of people with MS with effective treatments and tools for assessing disease activity.

According to the UB researchers, being able to routinely measure how much brain atrophy has occurred would help physicians better predict how a patient’s disease will progress. It could also provide physicians with more information about how well MS treatments are working in individual patients. These and other benefits were outlined in a recent review study the researchers published in Expert Review of Neurotherapeutics.

“Measuring brain atrophy on an annual basis will allow clinicians to identify which of their patients is at highest risk for physical and cognitive decline,” said Robert Zivadinov, MD, PhD, professor of neurology and director of the Buffalo Neuroimaging Analysis Center in the Jacobs School of Medicine and Biomedical Sciences at UB. Over the past 10 years, he and his colleagues at UB, among the world’s most prolific groups studying brain atrophy and MS, developed the world’s largest database of magnetic resonance images of individuals with MS, consisting of 20,000 brain scans with data from about 4,000 MS patients. The new tool, Neurological Software Tool for Reliable Atrophy Measurement in MS, or NeuroSTREAM, simplifies the calculation of brain atrophy based on data from routine magnetic resonance images and compares it with other scans of MS patients in the database.

More than lesions

Without measuring brain atrophy, clinicians cannot obtain a complete picture of how a patient’s disease is progressing, Zivadinov said.

“MS patients experience, on average, about three to four times more annual brain volume loss than a healthy person,” he said. “But a clinician can’t tell a patient, ‘You have lost this amount of brain volume since your last visit.'”

Instead, clinicians rely primarily on the presence of brain lesions to determine how MS is progressing. “Physicians and radiologists can easily count the number of new lesions on an MRI scan,” said Zivadinov, “but lesions are only part of the story related to development of disability in MS patients.”

And even though MS drugs can stop lesions from forming, in many cases brain atrophy and the cognitive and physical decline it causes will continue, the researchers say.

“While the MS field has to continue working on solving challenges related to brain atrophy measurement on individual patient level, its assessment has to be incorporated into treatment monitoring, because in addition to assessment of lesions, it provides an important additional value in determining or explaining the effect of disease-modifying drugs,” Zivadinov and co-authors wrote in a June 23 editorial that was part of a series of commentaries in Multiple Sclerosis Journal addressing the pros and cons of using brain atrophy to guide therapy monitoring in MS.

Soon, the UB researchers will begin gathering data to create a database of brain volume changes in more than 1,000 patients from 30 MS centers in the U.S. and around the world. The objective is to determine if NeuroSTREAM can accurately quantify brain volume changes in MS patients.

The software runs on a user-friendly, cloud-based platform that complies with health privacy regulations such as HIPAA. It is easily accessible from workstations, laptops, tablets and smartphones. The ultimate goal is to develop a user-friendly website to which clinicians can upload anonymized scans of patients and receive real-time feedback on what the scans reveal.

NeuroSTREAM measures brain atrophy by tracking the lateral ventricular volume (LVV), the volume of one of the brain structures that contain cerebrospinal fluid. When atrophy occurs, the LVV expands.
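To make the idea concrete, here is a minimal sketch, in Python, of how an LVV-based atrophy check against a normative database could look. This is purely illustrative: the function names and reference numbers are invented and do not reflect NeuroSTREAM's actual algorithms or any clinical values.

```python
import statistics

def lvv_percent_change(lvv_baseline_ml, lvv_followup_ml):
    """Percentage change in lateral ventricular volume between two scans."""
    return 100.0 * (lvv_followup_ml - lvv_baseline_ml) / lvv_baseline_ml

def atrophy_z_score(patient_change, reference_changes):
    """Compare a patient's annual LVV change with a reference MS population.

    A higher z-score means the ventricles are expanding (atrophy is
    progressing) faster than is typical for the reference group.
    """
    mean = statistics.mean(reference_changes)
    stdev = statistics.stdev(reference_changes)
    return (patient_change - mean) / stdev

# Illustrative numbers only -- not clinical reference values.
reference = [1.8, 2.5, 3.1, 2.2, 2.9, 3.4, 2.0, 2.7]  # hypothetical % LVV change per year
patient = lvv_percent_change(lvv_baseline_ml=22.0, lvv_followup_ml=23.1)  # 5.0% increase
print(f"Patient LVV change: {patient:.1f}%  z = {atrophy_z_score(patient, reference):+.2f}")
```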

Canary in the coal mine

“The ventricles are a surrogate measure of brain atrophy,” said Michael G. Dwyer III, PhD, assistant professor in the Department of Neurology and the Department of Bioinformatics in the Jacobs School of Medicine and Biomedical Sciences at UB. “They’re the canary in the coal mine.”

Dwyer, a computer scientist and director of technical imaging at the Buffalo Neuroimaging Analysis Center, is principal investigator on the NeuroSTREAM software development project. At the American Academy of Neurology meeting in April, he reported preliminary results showing that NeuroSTREAM provided a feasible, accurate, reliable and clinically relevant method of measuring brain atrophy in MS patients, using LVV.

“Usually, you need high-resolution research-quality brain scans to do this,” Dwyer explained, “but our software is designed to work with low resolution scans, the type produced by the MRI machines normally found in clinical practice.”

To successfully measure brain atrophy in a way that’s meaningful for treatment, Zivadinov explained, what’s needed is a normative database through which individual patients can be compared to the population of MS patients. “NeuroSTREAM provides context, because it compares a patient’s brain not just to the general population but to other MS patients,” said Dwyer.

Computer bug finder

Researchers at the New York University Tandon School of Engineering, in collaboration with the MIT Lincoln Laboratory and Northeastern University, are taking an unorthodox approach to the problem of finding software vulnerabilities: instead of finding and remediating bugs, they're adding them by the hundreds of thousands.

Brendan Dolan-Gavitt, an assistant professor of computer science and engineering at NYU Tandon, is a co-creator of LAVA, or Large-Scale Automated Vulnerability Addition, a technique of intentionally adding vulnerabilities to a program’s source code to test the limits of bug-finding tools and ultimately help developers improve them. In experiments using LAVA, they showed that many popular bug finders detect merely 2 percent of vulnerabilities.

A paper detailing the research was presented at the IEEE Symposium on Security and Privacy and was published in the conference proceedings. Technical staff members of the MIT Lincoln Laboratory led the technical research: Patrick Hulin, Tim Leek, Frederick Ulrich, and Ryan Whelan. Collaborators from Northeastern University are Engin Kirda, professor of computer and information science; Wil Robertson, assistant professor of computer and information science; and doctoral student Andrea Mambretti.

Dolan-Gavitt explained that the efficacy of bug-finding programs is based on two metrics: the false positive rate and the false negative rate, both of which are notoriously difficult to calculate. It is not unusual for a program to detect a bug that later proves not to be there — a false positive — and to miss vulnerabilities that are actually present — a false negative. Without knowing the total number of real bugs, there is no way to gauge how well these tools perform.

“The only way to evaluate a bug finder is to control the number of bugs in a program, which is exactly what we do with LAVA,” said Dolan-Gavitt. The automated system inserts known quantities of novel vulnerabilities that are synthetic yet possess many of the same attributes as computer bugs in the wild. Dolan-Gavitt and his colleagues dodged the typical five-figure price tag for manual, custom-designed vulnerabilities and instead created an automated system that makes judicious edits in real programs’ source code.

The result: hundreds of thousands of unstudied, highly realistic vulnerabilities that are inexpensive, span the execution lifetime of a program, are embedded in normal control and data flow, and manifest only for a small fraction of inputs lest they shut the entire program down. The researchers had to create novel bugs, and in significant numbers, in order to have a large enough body to study the strengths and shortcomings of bug-finding software. Previously identified vulnerabilities would easily trip current bug finders, skewing the results.

The team tested existing bug-finding software and found that just 2 percent of bugs created by LAVA were detected. Dolan-Gavitt explained that automated bug identification is an extremely complex task that developers are constantly improving. The researchers will share their results to assist these efforts.
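The evaluation itself reduces to set arithmetic once the injected bugs are known. The sketch below is a hypothetical illustration of that scoring step; the ID scheme and report format are made up, and this is not LAVA's actual tooling.

```python
def score_bug_finder(injected_bug_ids, reported_bug_ids):
    """Score a bug finder against a corpus with a known set of injected bugs.

    injected_bug_ids: IDs of bugs deliberately added (the ground truth).
    reported_bug_ids: IDs the tool under test claims to have found.
    """
    injected = set(injected_bug_ids)
    reported = set(reported_bug_ids)

    true_positives = injected & reported    # injected bugs the tool found
    false_negatives = injected - reported   # injected bugs the tool missed
    false_positives = reported - injected   # reports with no matching injected bug

    return {
        "detection_rate": len(true_positives) / len(injected),
        "false_negatives": len(false_negatives),
        "false_positives": len(false_positives),
    }

# Toy example: 1,000 injected bugs, 25 reports, 20 of them real.
injected = {f"BUG-{i:04d}" for i in range(1000)}
reported = {f"BUG-{i:04d}" for i in range(20)} | {f"SPURIOUS-{j}" for j in range(5)}
print(score_bug_finder(injected, reported))  # detection_rate = 0.02, i.e. 2 percent
```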

Additionally, the team is planning to launch an open competition this summer to allow developers and other researchers to request a LAVA-bugged version of a piece of software, attempt to find the bugs, and receive a score based on their accuracy.

“There has never been a performance benchmark at this scale in this area, and now we have one,” Dolan-Gavitt said. “Developers can compete for bragging rights on who has the highest success rate in bug-finding, and the programs that will come out of the process could be stronger.”

Can't afford to ignore the human element of IT

“Organizations focus too much on the technical and mechanical aspects of IT errors, rather than the human and environmental aspects of the errors,” said Sumantra Sarkar, assistant professor of information systems in the School of Management. “Our study suggests the mood and personality traits of the software development team affect how they report on self-committed errors in IT projects. A minor glitch in design or programming can have devastating consequences. For example, even a small error in software design could result in a NASA capsule disaster in outer space.”

“The roles of mood and conscientiousness in reporting of self-committed errors on IT projects,” published in the Information Systems Journal, examines how human elements influence IT errors and decision-making. The research also establishes a theoretical framework intended to explain some of the decision-making processes associated with reporting self-committed errors.

Since the research suggests IT errors are caused by a combination of factors, the researchers said it is important to adopt procedures that identify inefficiencies, ineffective practices and preventable errors so that the associated IT systems can be improved. It is also important to look at the individuals working on information technology teams.

According to the paper, current research on IT error reporting mainly explores the issues related to resources and technology, such as budget shortages, hardware malfunctions or labor shortages.

“We found a difference in the self-committed IT error reporting process of developers depending on if they were in a positive or negative mood,” Sarkar said. “When IT workers were in a positive mood, they were less likely to report on self-committed errors. This can be explained by how being in a positively elevated state can impede one’s cognitive processing.”

The study has managerial implications, too.

“Practitioners often perceive software development as dependent on machines, as opposed to humans, which is not a sustainable mindset,” Sarkar said. “Managers should establish a good rapport with team members to foster an environment that will allow employees to speak up when they feel their mood could affect their reporting decisions.”

The paper also states that IT managers should emphasize to their employees the benefits of reporting self-committed errors because, ultimately, IT errors that go unreported could hurt the company more in the long run.

Sarkar said employees should be cognizant of how their mood could impact their reporting decisions. “Before IT workers make decisions regarding self-committed errors, they should assess their mood and determine if they should wait until they are in a more neutral state to make reporting decisions,” he said.

The paper also looks at how the personality trait conscientiousness can influence error-reporting decisions.

“We identified conscientiousness as being one of the most important personality traits related to IT error-reporting decisions. Conscientious workers have a strong sense of duty and selflessness and are more willing to report self-committed errors,” Sarkar said. “Managers should be aware that conscientious team members are less susceptible to the influences of mood on decision making.”

Further the analysis of microbial genomes

The Cloud Infrastructure for Microbial Bioinformatics (or CLIMB project) is a resource for the UK’s medical microbiology community and international partners. It will support their research by providing free cloud-based computing, storage, and analysis tools.

CLIMB is a collaboration between academic and computing staff at the University of Warwick and the Universities of Bath, Birmingham, Cardiff and Swansea.

Professor Mark Pallen, Professor of Microbial Genomics at the University of Warwick, is the principal investigator on the project. He said: “CLIMB represents a user-friendly, one-stop shop for sharing software and data between medical microbiologists in the academic and clinical arenas.

“Using the cloud means that rather than dozens, or even hundreds, of research groups across the country having to set up and maintain their own servers, users can access shared pre-configured computational resources on demand.”

Key to the set-up is the concept of virtualisation, which allows users to work in a simulated computer environment populated by virtual machines (VMs) that sit on top of the physical hardware but look to the user just like conventional servers. Each of the four universities involved has the same equipment installed, and the sites will operate as one integrated system. It offers researchers huge data storage capacity, very high-memory research servers for maximum performance, and integration with relevant biological databases.
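For illustration, this is roughly what requesting such a pre-configured virtual machine can look like from a script, using the openstacksdk Python library against an OpenStack-style research cloud. The cloud, image, flavour and network names here are hypothetical, and nothing below is taken from CLIMB's actual configuration.

```python
# Hypothetical sketch: launching an analysis VM on an OpenStack-style research
# cloud with openstacksdk. All names are invented for illustration.
import openstack

conn = openstack.connect(cloud="my-research-cloud")      # credentials from clouds.yaml

image = conn.compute.find_image("genomics-toolkit-vm")    # a pre-configured VM image
flavor = conn.compute.find_flavor("high-memory")          # e.g. a large-RAM flavour
network = conn.network.find_network("project-net")

server = conn.compute.create_server(
    name="assembly-run-01",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
    key_name="my-keypair",
)
server = conn.compute.wait_for_server(server)             # block until the VM is active
print(f"VM ready at {server.access_ipv4}")
```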

The project is funded by the UK’s Medical Research Council and is supported by three world-class medical research fellows and two newly refurbished bioinformatics facilities at the Universities of Warwick and Swansea.

With improvements in sequencing technologies, generating genomic data sets has become much easier. However, many academics don't have access to the resources they need to perform the subsequent bioinformatics analyses.

CLIMB will provide them with the ability to do this and to share scripts and pipelines. There are also plans for workshops and meetings to train, share knowledge and develop the microbial bioinformatics community.

Nick Loman, CLIMB research fellow at the University of Birmingham, said: “We have already used CLIMB to analyse and share data from the Ebola outbreak in West Africa. This represents a step-change in collaborative working, particularly when faced with public health emergencies.”

Professor Pallen added: “We see CLIMB as more than an academic facility; instead, we hope it will act as a bridge between academics and public health professionals, facilitating sharing of skills, knowledge and approaches between the two communities, as well as exchange of software and data.”

Model of adware and other unwanted software

Few computer users have been spared the nuisance of unwanted software: Following what appears to be a legitimate software update or download, a barrage of advertisements overruns the screen, or a flashing pop-up warns of the presence of malware, demanding the purchase of what is often fraudulent antivirus software. On other occasions, the system’s default browser is hijacked, redirecting to ad-laden pages.

Despite the prevalence of such unwanted software — Google tracks more than 60 million attempted installs per week, three times the number of malware attempts — the source of these installs and the business model underlying the practice were not well understood. Researchers from Google and the New York University Tandon School of Engineering have now conducted the first analysis of the link between commercial pay-per-install (PPI) practices and the distribution of unwanted software.

Kurt Thomas, a research scientist at Google, and Damon McCoy, an assistant professor of computer science and engineering at NYU Tandon, led a team of researchers from Safe Browsing and Chrome Security to investigate commercial PPI schemes as a main vehicle for moving unwanted software from developers to unwitting installers. Their paper, Investigating Commercial Pay-Per-Install and the Distribution of Unwanted Software, will be presented at the USENIX Security Symposium, a top computer security conference, in Austin, Texas, next week.

Commercial PPI is a monetization scheme wherein third-party applications — often consisting of unwanted software such as adware, scareware, and browser hijacking programs — are bundled with legitimate applications in exchange for payment to the legitimate software company. When users install the package, they get the desired piece of software as well as a stream of unwanted programs riding stowaway. Thomas, McCoy, and their colleagues cite reports indicating that commercial PPI is a highly lucrative global business, with one outfit reporting $460 million in revenue in 2014 alone. It should be noted that this revenue reflects a mix of both legitimate as well as unwanted software downloads.

“If you’ve ever downloaded a screen saver or other similar feature for your laptop, you’ve seen a ‘terms and conditions’ page pop up where you consent to the installation,” McCoy explained. “Buried in the text that nobody reads is information about the bundle of unwanted software programs in the package you’re about to download.” The presence of a consent form allows businesses to operate legally, but McCoy classifies the extra applications as “treading a fine line between malware and unwanted software.”

The report explains that PPI businesses operate through a network of affiliates — brokers who forge the deals that bundle advertisements (often unwanted software) with popular software applications, then place download offers on well-trafficked sites where they’re likely to be clicked on. Parties are paid separately — meaning some legitimate developers do not know their products are being bundled with unwanted software — and they are paid as much as two dollars per install.

To better understand the install process, the researchers gained access to four PPI affiliates by routinely downloading the software packages and analyzing the components. Among their more important discoveries was the degree to which such downloaders are personalized to maximize the chances that their payload will be delivered.

When an installer runs, the user’s computer is “fingerprinted” to determine which adware is available to run on that particular machine. Additionally, the downloader searches for antivirus protection, factoring in the presence or absence of such protections in its approach. “They do their best to bypass antivirus, so the program will intentionally inject those elements — whether it’s adware or scareware — that are likeliest to evade whichever antivirus program is running,” McCoy said.

Google has long tracked web pages known to harbor unwanted software offers and continuously updates the Safe Browsing protection in its Chrome browser to warn users when they visit such pages. Yet research shows that PPI affiliates are also adjusting their tactics in an attempt to dodge Safe Browsing detection.

People ignore security warnings on computers because they come at bad times

A new study from BYU, in collaboration with Google Chrome engineers, finds that the status quo of warning messages appearing haphazardly — while people are typing, watching a video, uploading files, etc. — results in up to 90 percent of users disregarding them.

Researchers found these times are less effective because of “dual task interference,” a neural limitation where even simple tasks can’t be simultaneously performed without significant performance loss. Or, in human terms, multitasking.

“We found that the brain can’t handle multitasking very well,” said study coauthor and BYU information systems professor Anthony Vance. “Software developers categorically present these messages without any regard to what the user is doing. They interrupt us constantly and our research shows there’s a high penalty that comes by presenting these messages at random times.”

For example, 74 percent of people in the study ignored security messages that popped up while they were on the way to close a web page window. Another 79 percent ignored the messages if they were watching a video. And a whopping 87 percent disregarded the messages while they were transferring information, in this case, a confirmation code.

“But you can mitigate this problem simply by finessing the timing of the warnings,” said Jeff Jenkins, lead author of the study appearing in Information Systems Research, one of the premier journals of business research. “Waiting to display a warning to when people are not busy doing something else increases their security behavior substantially.”

For example, Jenkins, Vance and BYU colleagues Bonnie Anderson and Brock Kirwan found that people pay the most attention to security messages when they pop up in lower dual task times such as:

  • After watching a video
  • Waiting for a page to load
  • After interacting with a website

The authors realize this all seems like common sense, but timing security warnings to appear when a person is more likely to be ready to respond isn't current practice in the software industry. Further, they're the first to show empirically the effects of dual task interference during computer security tasks. In addition to showing what this multitasking does to user behavior, the researchers found what it does to the brain.
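A minimal sketch of the scheduling idea follows; the event names and logic are hypothetical and are not taken from the BYU study or from Chrome. The point is simply that non-urgent warnings are queued while the user is busy and released at the next natural breakpoint.

```python
from collections import deque

# Moments treated as natural breakpoints (low dual-task interference).
LOW_INTERFERENCE_EVENTS = {"video_ended", "page_loaded", "site_interaction_done"}
# Activities during which the user is busy and warnings should be held back.
HIGH_INTERFERENCE_EVENTS = {"typing", "video_playing", "uploading", "entering_code"}

class WarningScheduler:
    """Queue security warnings and show them only at low dual-task moments."""

    def __init__(self):
        self.pending = deque()
        self.user_busy = False

    def request_warning(self, message):
        if self.user_busy:
            self.pending.append(message)       # defer instead of interrupting
        else:
            self.show(message)

    def on_event(self, event):
        if event in HIGH_INTERFERENCE_EVENTS:
            self.user_busy = True
        elif event in LOW_INTERFERENCE_EVENTS:
            self.user_busy = False
            while self.pending:                # flush deferred warnings at the breakpoint
                self.show(self.pending.popleft())

    def show(self, message):
        print(f"[SECURITY WARNING] {message}")

scheduler = WarningScheduler()
scheduler.on_event("video_playing")
scheduler.request_warning("This download may harm your computer.")  # deferred
scheduler.on_event("video_ended")                                    # shown now
```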

For part of the study, researchers had participants complete computer tasks while an fMRI scanner measured their brain activity. The experiment showed neural activity was substantially reduced when security messages interrupted a task, as compared to when a user responded to the security message itself.

The BYU researchers used the functional MRI data as they collaborated with a team of Google Chrome security engineers to identify better times to display security messages during the browsing experience.

Computing boosts energy efficiency

Energy consumption is one of the key challenges of modern computing, whether for wireless embedded client devices or high-performance computing centers. The ability to develop energy-efficient software is crucial as the use of data and data processing keeps increasing in all areas of society. The need for power-efficient computing is not only about environmental impact; we need energy-efficient computing simply to deliver on the growth that has been predicted.

The EU-funded Excess project, which finishes on August 31, set out three years ago to address what the researchers saw as a lack of holistic, integrated approaches covering all system layers, from hardware to user-level software, and the limits this placed on exploiting existing solutions and their energy efficiency. They first analyzed where energy and performance are wasted, and used that knowledge to develop a framework that should allow rapid development of energy-efficient software.

“When we started this research program there was a clear lack of tools and mathematical models to help software engineers program in an energy-efficient way, and to reason abstractly about the power and energy behavior of their software,” says Philippas Tsigas, professor in Computer Engineering at Chalmers University of Technology and project leader of Excess. “The holistic approach of the project involves both hardware and software components together, enabling the programmer to make power-aware architectural decisions early. This allows for larger energy savings than previous approaches, where software power optimization was often applied as a secondary step, after the initial application was written.”

The Excess project has taken major steps towards providing software developers and system designers with a set of tools and models that allow them to program in an energy-efficient way. The toolbox ranges from fundamentally new energy-saving hardware components, such as the Movidius Myriad platform, to sophisticated, efficient libraries and algorithms.

Tests run on large data streaming aggregations, a common operation in real-time data analytics, have shown impressive results. Using the Excess framework, a programmer can produce a solution that is 54 times more energy-efficient than a standard implementation on a high-end PC processor. The holistic Excess approach first exploits the hardware benefits of an embedded processor, and then shows the best way to split the computations inside the processor to enhance performance even further.
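For readers unfamiliar with the workload, a data streaming aggregation is simply a running computation over windows of an incoming stream. Here is a minimal, framework-free sketch of a tumbling-window sum in Python; it illustrates the operation being benchmarked, not the Excess framework itself.

```python
def tumbling_window_sum(stream, window_size):
    """Aggregate an unbounded stream into sums over fixed-size windows.

    Yields one aggregate per window, so downstream consumers never need to
    hold the full stream in memory -- the pattern the text calls a data
    streaming aggregation.
    """
    window_total = 0
    count = 0
    for value in stream:
        window_total += value
        count += 1
        if count == window_size:
            yield window_total
            window_total = 0
            count = 0

# Example: sensor readings arriving as a stream, aggregated four at a time.
readings = iter([3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5, 8])
print(list(tumbling_window_sum(readings, window_size=4)))   # [9, 22, 21]
```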

Design a chip that checks for sabotage

Siddharth Garg, an assistant professor of electrical and computer engineering at the NYU Tandon School of Engineering, and fellow researchers are developing a unique solution: a chip with both an embedded module that proves that its calculations are correct and an external module that validates the first module’s proofs.

While software viruses are easy to spot and fix with downloadable patches, deliberately inserted hardware defects are invisible and act surreptitiously. For example, a secretly inserted “back door” function could allow attackers to alter or take over a device or system at a specific time. Garg’s configuration, an example of an approach called “verifiable computing” (VC), keeps tabs on a chip’s performance and can spot telltale signs of Trojans.

The ability to verify has become vital in an electronics age without trust: Gone are the days when a company could design, prototype, and manufacture its own chips. Manufacturing costs are now so high that designs are sent to offshore foundries, where security cannot always be assured.

But under the system proposed by Garg and his colleagues, the verifying processor can be fabricated separately from the chip. “Employing an external verification unit made by a trusted fabricator means that I can go to an untrusted foundry to produce a chip that has not only the circuitry performing computations, but also a module that presents proofs of correctness,” said Garg.

The chip designer then turns to a trusted foundry to build a separate, less complex module: an ASIC (application-specific integrated circuit), whose sole job is to validate the proofs of correctness generated by the internal module of the untrusted chip.

Garg said that this arrangement provides a safety net for the chip maker and the end user. “Under the current system, I can get a chip back from a foundry with an embedded Trojan. It might not show up during post-fabrication testing, so I’ll send it to the customer,” said Garg. “But two years down the line it could begin misbehaving. The nice thing about our solution is that I don’t have to trust the chip because every time I give it a new input, it produces the output and the proofs of correctness, and the external module lets me continuously validate those proofs.”

An added advantage is that the chip built by the external foundry is smaller, faster, and more power-efficient than the trusted ASIC, sometimes by orders of magnitude. The VC setup can therefore potentially reduce the time, energy, and chip area needed to generate proofs.

“For certain types of computations, it can even outperform the alternative: performing the computation directly on a trusted chip,” Garg said.
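The general principle, that checking a result can be far cheaper than computing it, is easy to see with a toy software analogue. The sketch below illustrates the verifiable-computing idea in the abstract and is not Garg's hardware scheme: an untrusted party sorts a list, and a trusted verifier accepts the output only after two linear-time checks, rather than re-sorting.

```python
from collections import Counter

def untrusted_sort(values):
    """The 'untrusted chip': performs the expensive computation."""
    return sorted(values)            # O(n log n) work, done where we cannot look

def verify_sorted(original, claimed):
    """The trusted verifier: accepts or rejects the claimed output.

    Two linear-time checks -- the claimed output is ordered and is a
    permutation of the input -- so verification is cheaper than re-sorting.
    """
    ordered = all(claimed[i] <= claimed[i + 1] for i in range(len(claimed) - 1))
    same_elements = Counter(original) == Counter(claimed)
    return ordered and same_elements

data = [42, 7, 19, 7, 3]
result = untrusted_sort(data)
print(verify_sorted(data, result))          # True: accept the untrusted result
print(verify_sorted(data, [3, 7, 7, 19]))   # False: a tampered result is rejected
```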

The researchers next plan to investigate techniques to reduce both the overhead that generating and verifying proofs imposes on a system and the bandwidth required between the prover and verifier chips. “And because with hardware, the proof is always in the pudding, we plan to prototype our ideas with real silicon chips,” said Garg.

Digital geoscience mapping

Now a unique new software package for virtual model interpretation and visualization is to be presented at the 2nd Virtual Geoscience Conference (VGC 2016) in Bergen, Norway.

The conference takes place on 21-23 September and is a multidisciplinary forum for researchers in the geosciences, geomatics and related disciplines to share their latest developments and applications.

Simon Buckley and colleagues at Uni Research CIPR are not just hosting the conference in Bergen, but will present their latest contribution to the field:

High performance 3D viewer

The software, called LIME, is a high-performance 3D viewer that can be highly useful for geoscientists returning to the office after fieldwork.

The software allows them to explore their 3D datasets and perform measurements, analysis and advanced visualization of different data types. The software is developed by the Virtual Outcrop Geology Group (VOG), a collaboration between Uni Research CIPR in Bergen and the University of Aberdeen, UK.

“The group has been at the forefront of digital outcrop geology for over ten years, pioneering many of the developments in data acquisition, processing and distribution. To facilitate the interpretation, visualisation and communication of 3D photorealistic models, we have been developing LIME for over five years,” Buckley says.

On the researcher’s own laptop

One of the unique things about LIME is that it can be downloaded and used on the researcher’s own laptop, and can handle very large 3D datasets with high performance.

“It allows the users to integrate 3D models from processing software, and do analysis and interpretation, to put together lots of types of data collected in fieldwork,” Buckley explains.

Digital mapping technology for many geoscience applications is based on a combination of 3D mapping methods: laser scanning and photogrammetry — 3D modelling from images — from the ground, from boats, and from helicopters for very large mountainsides.

And more recently: from unmanned aerial vehicles, or drones.

“In addition to this we focus on fusing new imaging techniques for mapping surface properties. An example is hyperspectral imaging, an infrared imaging method that allows the surface material content of an outcrop, building or drill core to be mapped in detail and remotely. This is what I call phase one of the digital geoscience mapping revolution, which has now become relatively mature,” Buckley says.

Integration of multiple techniques

In phase two, the collection of data from digital mapping is becoming ubiquitous, but researchers around the world who are new to using this type of data can still face steep learning curves, making it difficult for them to analyze their models, Buckley of Uni Research CIPR underscores. This is the basis for LIME:

“Here is our advantage, as we work on the integration of multiple techniques and data types, interpretation software like LIME, databases for storing, accessing and manipulating the data, and mobile devices — viewing and interpretation on tablets, in the field,” Buckley says.

The models collected using digital mapping techniques, combined with the LIME software, enable geologists to study exposed outcrops and rock formations that are otherwise very difficult to access.

“Looking at details of the outcrop and dropping in new sorts of data all of a sudden becomes easier,” Buckley says. Examples are integration of interpretation panels, geophysical data or a new sedimentary log, which looks at different rock types.

Key features

One of the key features of the high-performance 3D viewer is that you can integrate images and project them onto the 3D models.

“Geoscientists are therefore able to integrate different types of field data, making it a powerful tool,” Buckley explains:

“In the end, we can make a very nice visual representation to show the analysis and the project datasets, which is very useful for geoscientists who want to present their results, for example to their collaborating partners and sponsors, to the public, or at conferences,” Buckley says.
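As a rough illustration of the image-projection feature described above, the sketch below projects a 3D point from an outcrop model into pixel coordinates using a generic pinhole-camera model. The camera parameters are made up, and this is not LIME's internal code.

```python
import numpy as np

def project_point(point_xyz, K, R, t):
    """Project a 3D world point into pixel coordinates with a pinhole camera.

    K: 3x3 intrinsic matrix (focal length, principal point).
    R, t: rotation and translation taking world coordinates to camera coordinates.
    Returns (u, v), the image pixel that would colour this point on the model.
    """
    p_cam = R @ np.asarray(point_xyz) + t      # world frame -> camera frame
    if p_cam[2] <= 0:
        return None                            # point is behind the camera
    uvw = K @ p_cam                            # camera frame -> homogeneous pixels
    return uvw[0] / uvw[2], uvw[1] / uvw[2]

# Made-up camera: 1000 px focal length, principal point at the image centre.
K = np.array([[1000.0,    0.0, 960.0],
              [   0.0, 1000.0, 540.0],
              [   0.0,    0.0,   1.0]])
R = np.eye(3)                                  # camera axes aligned with world axes
t = np.array([0.0, 0.0, 0.0])

print(project_point([2.0, -1.0, 10.0], K, R, t))   # approximately (1160.0, 440.0)
```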