Saturday, 7 June 2014

Why OpenSSL Being Patched Again Is Good News

Man In The Middle
There is a new version of OpenSSL, and, yes, it turns out previous versions of the security package had some serious vulnerabilities. However, the discovery of these flaws is a good thing; we aren't looking at a disaster of Heartbleed proportions.
At first glance, the OpenSSL advisory listing all seven vulnerabilities fixed in the new release looks scary. One of the flaws, if exploited, could allow an attacker to see and modify traffic between an OpenSSL client and an OpenSSL server in a man-in-the-middle attack. The flaw affects all client versions of OpenSSL, and servers running 1.0.1 or 1.0.2-beta1. For the attack to succeed—and it's fairly complicated to begin with—vulnerable versions of both the client and the server need to be present.
Even though the extent of the issue is very limited, perhaps you are concerned about continuing to use software with OpenSSL included. First, Heartbleed. Now, man-in-the-middle attacks. Focusing on the fact that OpenSSL has bugs (what software doesn't?) misses a very critical point: They are being patched.
More Eyes, More Security
The fact that developers are disclosing these bugs—and fixing them—is reassuring, because it means we have more eyeballs on the OpenSSL source code. More people are scrutinizing each line for potential vulnerabilities. After the disclosure of the Heartbleed bug earlier this year, many people were surprised to discover the project did not have a lot of funding or many dedicated developers despite its widespread use.
"It [OpenSSL] deserves the attention from the security community it is receiving now," said Wim Remes, managing consultant for IOActive.
A consortium of tech giants, including Microsoft, Adobe, Amazon, Dell, Google, IBM, Intel, and Cisco, banded together with the Linux Foundation to form the Core Infrastructure Initiative (CII). CII funds open source projects to add full-time developers, conduct security audits, and improve testing infrastructure. OpenSSL was the first project funded under CII; Network Time Protocol and OpenSSH are also being supported.
"The community has risen to the challenge to ensure that OpenSSL becomes a better product and that issues are found and fixed quickly," said Steve Pate, chief architect at HyTrust.
Should You Worry?
If you are a system administrator, you must update OpenSSL. More bugs will be found and fixed, so administrators must keep an eye out for patches to keep the software up to date.
For most consumers, there is not a lot to worry about. In order to exploit the bug, OpenSSL needs to be present at both ends of the communication, and that typically doesn't happen in Web browsing, said Ivan Ristic, director of engineering at Qualys. Desktop browsers do not rely on OpenSSL, and even though the stock Web browser on Android devices and Chrome for Android both use OpenSSL, "the conditions necessary for exploitation are quite a bit harder to find," Ristic said. The fact that exploitation requires man-in-the-middle positioning is "limiting," he said.
OpenSSL is often used in command line utilities and for programmatic access, so users need to update right away. And any software application they use that utilizes OpenSSL should be updated as soon as new versions become available.
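One programmatic way to check which OpenSSL build a system's scripting runtime links against is Python's standard ssl module. This is a minimal sketch; note that it reports the library Python itself was compiled against, which may differ from the system-wide openssl binary used by other applications.

```python
import ssl

def tls_library_version():
    """Return the version string of the TLS library Python links against.

    Note: this reflects the library bundled with or linked by this Python
    interpreter, which may differ from the system `openssl` command used
    by other software on the machine.
    """
    return ssl.OPENSSL_VERSION

print(tls_library_version())
```

Comparing the reported version against the project's advisory tells you whether this particular runtime still needs the patched release.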
Update the software and "prepare for frequent updates in OpenSSL's future as these are not the last bugs that will be found in this software package," warned Wolfgang Kandek, CTO of Qualys.

TrueCrypt Shut Down; What to Use Now to Encrypt Your Data

TrueCrypt Dead
If you use TrueCrypt to encrypt your data, you need to switch to different encryption software to protect your files, and even whole hard drives.
The open source and freely available TrueCrypt software has been popular for the past ten years because it was perceived to be independent from major vendors. The creators of the software have not been publicly identified. Edward Snowden allegedly used TrueCrypt, and security expert Bruce Schneier was another well-known supporter of the software. The tool made it easy to turn a flash drive or a hard drive into an encrypted volume, securing all the data stored on it from prying eyes.
The mysterious creators abruptly shut down TrueCrypt on Wednesday, claiming it was unsafe to use. "WARNING: Using TrueCrypt is not secure as it may contain unfixed security issues," read the text on TrueCrypt's SourceForge page. "You should migrate any data encrypted by TrueCrypt to encrypted disks or virtual disk images supported on your platform," the message said.
"It's time to start looking for an alternative way to encrypt your files and hard drive," wrote independent security consultant Graham Cluley.
Consensus: Not a Hoax
At first, there were concerns that malicious attackers had defaced the site, but it's becoming increasingly clear this is not a hoax. The SourceForge site now offers an updated version of TrueCrypt (digitally signed by the developers, so this isn't a hack) which pops up an alert during the installation process to tell users they should use BitLocker or some other tool.
"I think it's unlikely that an unknown hacker identified the TrueCrypt developers, stole their signing key, and hacked their site," said Matthew Green, a professor specializing in cryptography at Johns Hopkins University.
What to Do Next
The site, as well as the popup alert in the software, has instructions for transferring TrueCrypt-encrypted files to Microsoft's BitLocker, which is built into Windows Vista Ultimate and Enterprise, Windows 7 Ultimate and Enterprise, and Windows 8 Pro and Enterprise. TrueCrypt version 7.2 lets users decrypt their files but won't let them create new encrypted volumes.
While BitLocker is the obvious alternative, there are other options to look at. Schneier told The Register he is switching back to Symantec's PGPDisk to encrypt his data. Symantec Drive Encryption ($110 for a single-user license) uses PGP, a well-known encryption method. There are also free tools for Windows, such as DiskCryptor. Security expert The Grugq put together a list of TrueCrypt alternatives last year, which is still useful.
SANS Institute's Johannes Ullrich recommended that Mac OS X users stick with FileVault 2, which is built into OS X 10.7 (Lion) and later. FileVault uses the XTS-AES 128-bit cipher, which is the same one used by the NSA. Linux users should stick with the built-in Linux Unified Key Setup (LUKS), Ullrich said. If you use Ubuntu, the operating system installer has the option to turn on full disk encryption right from the start.
However, users will need a different tool for portable drives that move between different operating systems. "PGP/GnuPG comes to mind," Ullrich said on the InfoSec Handlers Diary.
German company Steganos is offering an older version of its encryption tool free to users (version 15 is the latest; the offer is for version 14), which is not exactly ideal.
Unknown Vulnerabilities
The fact that TrueCrypt may have security vulnerabilities is jarring, considering that an independent audit of the software is currently under way and had produced no such reports. Supporters raised $70,000 for the audit over concerns that the National Security Agency has the capability to decode significant amounts of encrypted data. The first phase of the investigation, which examined the TrueCrypt bootloader, was released just last month. It "found no evidence of backdoors or intentional flaws." The next phase, which would examine the cryptography used by the software, was scheduled to be completed this summer.
Green, who was one of the people involved with the audit, said he did not have advance warning of what the TrueCrypt developers planned. "Last I heard from Truecrypt: 'We are looking forward to results of phase 2 of your audit. That you very much for all your efforts again!'" he posted on Twitter. The audit is expected to continue despite the shutdown.
It's possible that the creators of the software decided to stop development because the tool is so old. Development "ended in 5/2014 after Microsoft terminated support of Windows XP," said the message on SourceForge. "Windows 8/7/Vista and later offered integrated support for encrypted disks and virtual disk images." With encryption built into many of the operating systems by default, the developers may have felt the software was no longer necessary.
To make things even murkier, it appears a ticket was added May 19 to remove TrueCrypt from the secure operating system Tails (another Snowden favorite). Whatever the case, it's clear nobody should be using the software at this point, Cluley warned.
"Whether hoax, hack, or genuine end-of-life for TrueCrypt, it's clear that no security-conscious users are going to feel comfortable trusting the software after this debacle," wrote Cluley.

IBM Identifies You By Your Web Surfing Style

IBM Fraud Detection Patent
Services like 2Checkout protect online businesses by analyzing transactions for signs of fraud. IBM appears to be taking this concept to the next level. The company announced it has patented a new "user-browser interaction-based fraud detection system" that should help online and cloud-based businesses detect fraudulent behavior. According to the press release, this new technology can "help web site operators, cloud service providers and mobile application developers more efficiently and effectively detect and deal with threats by using analytics to thwart fraudsters."
You Are How You Surf
Just as some experts can spot a physical imposter by noticing a difference in gait, the IBM invention tracks minute details about the way users interact with the browser and website. Using some areas of the site more than others, navigating with the keyboard, sticking strictly with the mouse, swiping a tablet in a particular way...all of these small characteristics build up an overall profile that identifies you, the legitimate user.
A hacker who logs into your account will "look" completely different, so the detection system will send a warning. The secure site can respond by requiring additional authentication. Of course, false detections are possible. Keith Walker, IBM Master Inventor and co-inventor on the patent, noted that a change in interaction could be "due to a broken hand or using a tablet instead of a desktop computer," but pointed out that "such a change would more likely be due to fraud." In any case, a legitimate user will have no trouble providing additional authentication.
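IBM's actual algorithm is not public, but the general idea behind this kind of behavioral profiling can be sketched with a toy example: represent each session as a feature vector (the features below are purely hypothetical) and flag sessions that stray too far from the profile learned from the legitimate user.

```python
import math

def euclidean(a, b):
    """Straight-line distance between two equal-length feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def looks_fraudulent(profile, session, threshold=1.0):
    """Flag a session whose behaviour deviates too far from the profile.

    The threshold is illustrative; a real system would calibrate it
    against the user's historical variance.
    """
    return euclidean(profile, session) > threshold

# Hypothetical features: mouse-vs-keyboard ratio, swipe speed, pages/visit
profile  = [0.8, 0.1, 5.2]    # learned from the legitimate user's history
normal   = [0.75, 0.12, 5.0]  # small drift: same person, slightly off
imposter = [0.2, 0.9, 1.5]    # a very different interaction style

print(looks_fraudulent(profile, normal))    # prints False
print(looks_fraudulent(profile, imposter))  # prints True
```

A production system would use many more features and a statistical model rather than a fixed distance cutoff, but the principle is the same: the legitimate user clusters tightly, and an imposter lands far away.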
The Flip Side
Once IBM's technology has built its profile of your usual ways of interacting with websites, a hacker pretending to be you won't match the profile. But what if you want to surf the web anonymously? You can route your traffic through a world-spanning series of TOR servers, but it seems to me that you could still be identified using that same profile.
It wouldn't surprise me at all to hear that the NSA is licensing this technology. And how about those data brokers that the FTC wants to rein in? I'm sure they'd love the ability to identify you online just based on your distinctive characteristics.
I don't think we can put this genie back in the bottle. Yes, it will definitely help protect us against identity theft and fraudulent transactions. But it's another step closer to a world where anonymity no longer exists.

“Hacking” cars on the road wirelessly is easy, claims expert

It is perfectly possible to “hack” a car while it is driving on the road, seize control, and force the vehicle into a fatal crash, says a car security specialist, speaking to Network World.
Security researchers have demonstrated such hacks using wired systems or short-range wireless such as Bluetooth, but Toucan Systems claims that attacks can be conducted from half a world away, from a computer at a desk.
Jonathan Brossard, quoted by the Sydney Morning Herald, “does not know of a car that has been hacked on the road but says his company does it for vehicle manufacturers in Europe.”
“The vehicle is remote from me. I am sitting at the desk and I am using the computer and driving your car from another country. I am saying it is possible. A car is, technically speaking, very much like a cell phone and that makes it vulnerable to attack from the internet. An attack is not unlikely.”
A report by CNN Money describes the security of “connected” cars as simply behind the times. CNN describes the 50 to 100 computers controlling steering, acceleration and brakes in the typical automobile as “really dumb” – and says “there’s a danger to turning your car into a smartphone on wheels”.
“Auto manufacturers are not up to speed,” said Ed Adams, a researcher at Security Innovation, speaking to CNN Money. “They’re just behind the times. Car software is not built to the same standards as, say, a bank application. Or software coming out of Microsoft.”
The report claims that the next generation of cars from both Audi and Tesla will be wirelessly connected to the internet via AT&T – and thus much more vulnerable.
Writing about a demo at the Blackhat conference in Las Vegas last year, ESET malware researcher Cameron Camp said, “Traditionally, cars have had rudimentary computing systems, implemented to carry out fixed tasks like measuring fuel for injection, making your transmission shift more smoothly under gentle acceleration or to improve gas mileage – things like that.
But with some manufacturers hoping to roll out location-aware browser-based or embedded information systems, can scams be far behind?”
The CNN Money report compared the 145,000 lines of computer code used in the spaceship that put men on the moon, Apollo 11, with the average modern automobile, which has 100 million.
Last year, Senator Edward J. Markey, Democrat, Massachusetts, pointed out in a publicly available letter to 20 auto manufacturers that average cars now have up to 50 electronic control units, often controlled by a car “network”, and that manufacturers had a duty to protect consumers against hackers.
The open letter has ignited a spate of commentary, with Market Oracle describing the crime as “cyberjacking”, and pointing out that the average family car contains 100 million lines of computer code, and that software can account for up to 40% of the cost of the vehicle, according to researchers at the University of Wisconsin-Madison.
Hacks against cars have been demonstrated before – but thus far, have relied on attackers having physical access to the vehicles. At the DefCon conference this year, two researchers showed how they could seize control of two car models from Toyota and Ford by plugging a laptop into a port usually used for diagnostics, as reported by We Live Security here.
So far, though, attacks where vehicles are “taken over” wirelessly have not been widely demonstrated.
“At the moment there are people who are in the know, there are naysayers who don’t believe it’s important, and there are others saying it’s common knowledge but right now there’s not much data out there,” said Charlie Miller, one of the ‘car hackers’ at DefCon. “We would love for everyone to start having a discussion about this, and for manufacturers to listen and improve the security of cars.”
“As vehicles become more integrated with wireless technology, there are more avenues through which a hacker could introduce malicious code, and more avenues through which a driver’s basic right to privacy could be compromised,” Senator Markey wrote. “These threats demonstrate the need for robust vehicle security policies to ensure the safety and privacy of our nation’s drivers.”
Markey argues that car companies should use third parties to test for wireless vulnerabilities, and should assess risks related to technologies purchased from other manufacturers.
A report by CNBC earlier this year described some of these threats in detail, describing car-hacking as “the new global cybercrime.”

Thousands of ex-workers in IT “still have password” for old jobs

Ex-employees often still have full access to the network of their previous employer, leaving the company open to “revenge attacks” – or just practical jokes.
Around 13% of employees boast that they still have full access to the systems of their previous employers, Help Net Security reports, and many others can still get into the networks of two or three former workplaces.
The survey was conducted at a recent IT conference, and 270 current IT workers answered the questionnaire. Workers who still had access to one previous employer didn’t stop there, the survey found.
Of those who had access to one previous employer’s systems, nearly a quarter could still access their last two employers, and 16% said they could access the network of every company they ever worked for.
Shockingly, while 84% of companies had strict policies on allowing contractors permanent access to networks, 16% did not, the report said.
Leaked credentials are often the “key” to large-scale data breaches. In Verizon’s report on corporate security in 2013, We Live Security reported that more than three-quarters (76%) of network intrusions relied on weak or stolen credentials – a risk that Verizon describes as “easily preventable”.
Philip Lieberman, CEO and President of Lieberman Software, said: “The results of this research show that a fundamental lack of IT security awareness in enterprises, particularly in the arena of controlling privileged logins, is potentially paving the way for a further wave of data breaches.”
“Organizations must implement a policy where privileged account passwords are automatically updated on a frequent basis, with unique and complex values. That way, when an employee does leave the company, he is not taking the password secrets that can gain access to highly sensitive systems.”
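The rotation policy Lieberman describes can be sketched in a few lines. This is an illustrative example, not his company's product: it uses Python's secrets module to generate a unique, complex password per privileged account, so a departing employee's remembered credentials become useless after the next rotation.

```python
import secrets
import string

# Character set for generated passwords; the symbol set is illustrative.
ALPHABET = string.ascii_letters + string.digits + "!@#$%^&*"

def rotate_password(length=24):
    """Generate one cryptographically strong, unique password."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

# Hypothetical privileged accounts; in practice the new values would be
# written straight into a password vault, never printed or logged.
accounts = ["root@db01", "admin@fw01"]
new_creds = {acct: rotate_password() for acct in accounts}
print({acct: len(pw) for acct, pw in new_creds.items()})
```

Scheduling this to run automatically on a frequent basis is what closes the gap the survey exposed: access dies with the old password, whether or not the offboarding checklist was followed.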

England footballers have their passport details leaked on Twitter

In an embarrassing breach of security, the passport numbers of members of the England Football squad have been accidentally tweeted out by the team’s official sponsor.
The information was included on an official FIFA team sheet, shared with members of the press one hour before the English team played a friendly match against Ecuador at the Sun Life Stadium in Miami.
Unfortunately England’s corporate sponsor Vauxhall clearly didn’t realise that the passport numbers might be sensitive, and excitedly tweeted out a smartphone photo of the line-up to ardent soccer fans.
The photograph showed the names, dates of birth, and passport numbers of England’s starting line-up of eleven players and the seven substitutes. Oops. Something of an own goal, there.
The picture included the players' dates of birth and passport numbers (redacted above)
The players’ dates of birth are easy for anybody to find with a little help from Wikipedia, but it doesn’t feel right reproducing them here, so I have redacted them in the image above along with the passport numbers, which clearly shouldn’t be in the public domain.
Vauxhall quickly realised its blunder, and deleted its tweet.
But, of course, the internet doesn’t work like that. Once you publish anything on the internet there is no guarantee that you will be able to remove every trace of it – especially if you directly tweeted it to thousands of avid football fans.
We all have to learn to be more careful about what we share on the internet – and think before we tweet.
At least former England football captain Gary Lineker had something amusing to say on the subject.
(There’s always a first time, Gary)
I’m sure none of us would like our passport details to become public knowledge, as there is always a chance that an identity fraudster might take advantage of the information for their own malicious purposes.
Bad enough for you or me – but imagine how much more tempting it might be for criminals to exploit the information when it relates to somebody who earns £125,000 per week.
No doubt, however, most of those handsomely-paid players (on the English side at least; I have no idea what kind of salaries footballers command in Ecuador) will have a minion who can organise a new passport for them should one be required.
The English Football Association (FA) says the data leak is nothing to do with them, and pointed the finger of blame at the match’s organisers:
“It is a matter for the match organisers, the publication and distribution of the team sheets are their responsibility.”
This isn’t, of course, the first time that FIFA has been connected with an alleged security breach involving passport information.
In August 2010, the Norwegian newspaper Dagbladet claimed that the details of 250,000 fans who had attended the 2006 FIFA World Cup in Germany had been sold on to ticket touts, including the passport details of 35,689 UK ticket purchasers.
According to reports at the time, the alleged data leak was blamed on a rogue employee at FIFA’s official ticketing agency, although investigators from the UK’s Information Commissioner’s Office (ICO) later asserted that there was no evidence that British passport holders had been exposed.
For those who care about such things, the England-Ecuador match ended as a 2-2 draw.
But there were definitely losers: the players who had their personal information needlessly shared with the world via Twitter.

NSA faces fresh revelations as Snowden anniversary arrives

Edward Snowden’s public revelations of mass surveillance conducted by the U.S. National Security Agency began one year ago today: June 5, 2013. Since then, the scope of the revelations has expanded to cover activities by the UK’s GCHQ, efforts to weaken encryption, and the spread of malicious code by the NSA, including malware implanted in IT hardware as it was being shipped to customers from manufacturers like Cisco. Revelations continued this past weekend with a look at the NSA’s use of facial recognition.

Pre-Snowden and Post-Snowden

Photo from an NSA PowerPoint slide showing a Cisco product box being opened in preparation for an “implant” of malicious code, without the knowledge of Cisco or its customer. Read on for a chart of Cisco’s stock since 6/5/13 compared to the NASDAQ
I don’t think it’s hyperbolic to predict that the history of computer security and data privacy will henceforth be referred to as two eras, pre-Snowden and post-Snowden. Frankly, the increase in general public awareness of, and interest in, a whole raft of security and privacy related issues over the last 12 months has been staggering.
As regular readers of these pages will know, we’ve been striving to raise security and privacy awareness for years, along with many of our colleagues in industry and non-profit organizations. Then suddenly, essentially through the actions of one person, people everywhere want to know more. Without making any value judgments about Snowden’s actions it is hard to deny that he has done more to raise awareness of digital security and privacy issues than anyone else, ever.
Ironically, much of what Snowden revealed was not exactly news to folks who have been in the information security business for a while, people who have read the works of Jim Bamford, or met with earlier whistleblowers such as William Binney, or have friends who worked at NSA and related agencies like the NRO (historically more heavily funded than the NSA).
What had been lacking before Snowden was widespread interest in what these agencies were up to in the realm of mass surveillance, malware distribution, and weakening of encryption. Apparently, the world was waiting for convincing documentation, documents of a type and quantity that the government could not deny, namely a bunch of PowerPoint slides! (Again, the irony is not lost on those of us who have spent countless hours creating hundreds of security awareness slides of our own over the last 20 years — yes, PowerPoint is that old.)
Something about seeing those slides, which often expressed the great enthusiasm with which the agency seemed to be pursuing “all personal data from everywhere”, connected with many people who had previously preferred not to think about these things. However, powerful as they were, those slides were not the only pictures that made an impact. Consider this chart of the price of Cisco shares relative to the index of the NASDAQ on which it trades. Suspicion around the integrity of Cisco products, raised by revelations about several different NSA/GCHQ programs, took its toll.
The price of Cisco stock (CSCO) versus the NASDAQ since June 5, 2013

Historic impacts

It was on June 5, 2013, that the Guardian newspaper put the first story online: NSA collecting phone records of millions of Verizon customers daily. As you can see from the date on that page, the story first appeared in print on June 6, but the paper’s own NSA timeline records the June 5 electronic publication. The PRISM story, the one that showed surveillance cooperation with NSA by tech companies like Google, Apple, and Facebook, broke on the 6th. (If you want the hour-by-hour narrative of how the documents came to be published, read “No Place to Hide” by Glenn Greenwald, it’s fascinating stuff.)
The effects of those articles, and the many others that followed, often illustrated with classified PowerPoint slides, are too numerous for one blog post to cover. However, a number of articles on We Live Security have addressed several different impacts, starting with changes in Internet behavior. ESET conducted a survey on this in the fall of 2013 and published the results. Many of our original findings were reinforced in 2014 when we ran a larger survey with Harris, and I discussed the survey findings in a pair of podcasts.
To find that a growing number of people are, because of the Snowden/NSA revelations, reluctant to bank or shop online, or even use email, points to a potentially serious erosion of trust in the technology that powers much of the world’s economy. These trends spell trouble for many sectors, not just banking and retailing. Consider healthcare, where increased use of Internet-based communications is a key element in many cost control models. If people lose trust in the ability to communicate privately over the Internet, those models won’t work.
The revelations about attempts by NSA and GCHQ to weaken encryption standards and technologies also merited a blog post. I felt compelled to urge people not to stop using encryption in: Encryption advice for companies in the wake of Snowden NSA revelations.
And of course, ESET responded to questions about how antivirus companies deal with government malware. When you read ESET response to Bits of Freedom open letter on detection of government malware, you may detect some frustration with the questions. That’s because it really makes no business sense for an antivirus product to give a pass to any particular piece of malicious code, even “righteous malware” deployed for what someone considers to be a good cause. Not to mention that AV companies come in many different national flavors (for example, ESET is headquartered in Slovakia, but has a presence in more than 180 countries).
A different Snowden/NSA impact, one of potentially greater concern, was summed up in a single word in a speech I heard yesterday about cyber conflict. (I’m currently attending CyCon, the annual conference of the NATO Cooperative Cyber Defence Centre of Excellence (CCD CoE) in Tallinn, Estonia.) That word was Suspicion, and the speaker was Dr. Jarno Limnéll, the Director of Cyber Security at Intel Security.
For Limnéll, a former career officer in the Finnish Defense Forces, suspicion is the largest single obstacle to cooperation between allies in cyber defense, cooperation that is essential as the threat of cyber conflict escalates (a threat that somehow feels very immediate when you’re sitting in Estonia). It is fair to say that the Snowden revelations did nothing to lessen suspicion between allies, and probably a great deal to deepen it.

Facing the future

With apologies for the pun in my heading, I saved the latest revelations for last, namely the story about the NSA’s use of facial recognition technology and its gathering of facial images, reported in the New York Times. Maybe it’s me, but this was not at all surprising. You have to assume both law enforcement and intelligence agencies are working with facial recognition, particularly as there is no real case law or legislation governing such activities in the United States. What would be worrying is potential abuse of mass access to facial databases such as state driver’s license records.
Not surprisingly, the NSA responded with statements that were reported as denials in some publications, as in this headline: NSA says it’s not collecting images of US citizens for facial recognition. Frankly that statement is not the NSA position, unless you qualify it. As Nextgov reported: “The National Security Agency collects and analyzes images of people’s faces as part of its vast surveillance operation, the agency’s director confirmed Tuesday.” But you have to throw “intentionally” in there as a qualifier. The new head of the NSA, Admiral Mike Rogers “insisted that the NSA doesn’t intentionally target facial images of Americans.” Quote:
“We use facial recognition as a tool to help us understand these foreign intelligence targets.”
Rogers also said that the NSA “does not have access to any vast databases of Americans’ facial images, specifically denying that the agency collects pictures from state DMV offices.” Of course, when you read these statements, you probably experience one of the other major impacts of the Snowden revelations and the government’s responses to them: you look for what is not being said. In other words, you’re suspicious. Is the reality that the NSA does not collect pictures from state DMV offices, because the FBI or DoJ does it for them? Does no “access to any vast databases of Americans’ facial images” mean they don’t consider their current access to be vast?
Sadly, unless the U.S. government somehow manages to achieve the right level of transparency and oversight for activities that fall within the NSA remit, crippling suspicion and erosion of trust may well be the legacy of the world learning, via Edward Snowden, about what the NSA and GCHQ have been spending so much taxpayer money on. For now we wait, wondering about the next revelation, which may be the names of Americans on whom the NSA has been spying. That’s probably not going to make anybody feel any better.

Android malware: how to keep your device safe from filecoders (and everything else)

When ESET researchers analyzed the first file-encrypting Trojan to demand a ransom from Android users via a control centre hidden on the anonymizing Tor network, the malware was “somewhat anticipated”, ESET malware researcher Robert Lipovsky writes.
The malware Android/Simplocker, available as a bogus app, seems at present to be a proof-of-concept rather than a fully-fledged attack ready for mass release.
Only last month, Lipovsky reported on an Android worm, Samsapo.A, which spread via an SMS message reading “Это твои фото?” (Russian for “Is this your photo?”) and a link to the malicious APK package.
In ESET’s Threat Trends Report predictions for this year, ESET experts warned of “an escalating increase in serious threats targeting Android phones and tablets – ESET detections of such malware increased more than 60% between 2012 and 2013. This trend is predicted to continue in 2014.”
ESET Latin America’s Research Laboratory in Buenos Aires points out that malware afflicting Android now uses classic PC attack methods – the discovery of vulnerabilities, then their exploitation through malicious code.
Thankfully, most of these threats can be avoided by sensible use of your device. Robert Lipovsky writes, “We encourage users to protect themselves against these threats using prevention and defensive measures. Adhering to security best practices, such as keeping away from untrustworthy apps and app sources, will reduce your risks. And if you keep current backups of all your devices then any ransomware or Filecoder trojan – be it on Android, Windows, or any operating system – is nothing more than a nuisance.”
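Lipovsky's point about backups can be made concrete: a backup only reduces a filecoder to "a nuisance" if you can verify the copies are intact. Here is a minimal sketch (file names are hypothetical) that records a SHA-256 digest per file at backup time and later reports any file whose contents no longer match.

```python
import hashlib
import os
import tempfile

def sha256_of(path, chunk=65536):
    """Compute the SHA-256 digest of a file, reading in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while True:
            block = f.read(chunk)
            if not block:
                break
            h.update(block)
    return h.hexdigest()

def snapshot(paths):
    """Map each file to its digest at backup time."""
    return {p: sha256_of(p) for p in paths}

def verify(manifest):
    """Return the files whose current contents no longer match."""
    return [p for p, digest in manifest.items() if sha256_of(p) != digest]

# Demo with a temporary file standing in for real user data.
with tempfile.TemporaryDirectory() as d:
    f = os.path.join(d, "photo.jpg")
    with open(f, "wb") as fh:
        fh.write(b"original bytes")
    manifest = snapshot([f])
    with open(f, "wb") as fh:
        fh.write(b"ENCRYPTED!!")   # simulate a filecoder overwriting data
    changed = verify(manifest)
    print(changed)                  # the tampered file is reported
```

Checking backup copies against such a manifest before you need them ensures the "restore" half of the plan actually works.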

Install ALL apps from Google Play or other big-name app stores unless you have a good reason not to

There are good reasons to install apps from outside Google’s Play Store (or other big-brand stores such as Amazon’s) – for instance, if your employer requires you to install a messaging app for work. Otherwise, don’t. Third-party stores, particularly those offering big-name apps for free, are generally infested with malware, and downloading apps from them is a good way to get infected. If you HAVE to install a file from an unknown source, ensure your device is set to block such installations again afterwards.

Don’t assume you’re safer on your Android

“Stay alert and don’t fall for common social engineering tricks,” says Lipovsky. Links, downloads and attachments can be just as risky on Android as on a PC. It’s easy to assume that, for instance, opening emails on Android isn’t as risky as it is on a PC – but while Android malware is still rarer than the PC variety, phishers can direct you to a fake website to harvest private information just as easily on an Android phone.

If possible, don’t use any old ‘Droid

In an ideal world, you should use a new phone running the latest version of Android – KitKat. Older versions are less secure – and your operator may not issue an upgrade for your handset, even if Google does. ESET Senior Research Fellow Righard J. Zwienenberg wrote last year, in response to a vulnerability: “The biggest problem for consumers is the enormous number of old phones running Android that are still in use, for which the operators will not release a new version. Many phones still run the very popular, but outdated, Gingerbread Android platform. Regardless of whether Google releases patches for these versions, the phones will remain vulnerable.”

Ensure you are running the latest update of Android available for your device

Updates from Google should be available OTA (over the air) – and on newer phones, you should be able to set your phone to auto-update (with a restriction to do so via Wi-Fi rather than cellular networks). Where these options live under Settings varies by manufacturer (on Samsung’s S5, it’s under About Device), but the menu option you need is Software Update. Select the first menu option to check you are running the latest version and, if not, download and install it immediately.

Do the basics – lock your phone

If you own one of the very latest handsets, such as Samsung or HTC’s flagships, you might have the luxury of locking your phone with up to three fingerprints using a built-in scanner – but if not, there’s no excuse for not locking it with a PIN or, ideally, a password (Settings > Security > Screen Lock). On new devices, you’ll usually get a choice of pattern, PIN, or password. A pattern is less secure than a PIN, and a password is your best choice. If you’re using your tablet or smartphone for business, be extra careful: talk to your IT department, and read our guide to encrypting data on Android here.

Don’t keep your valuables on your device

Lipovsky says, “If you keep current backups of all your devices then any ransomware or Filecoder trojan – be it on Android, Windows, or any operating system – is nothing more than a nuisance.” Back up your phone when possible – either manually, by connecting it to a PC, or by using your manufacturer’s auto-backup (Samsung accounts, for instance, allow you to back up phones). Use apps such as Google Drive or Dropbox to ensure data – like photographs – is not stored solely on the device.
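As a rough sketch of that backup advice, here is a minimal, stdlib-only Python example that copies a photo folder into a dated backup directory. The paths are placeholders we have invented for illustration – in practice you would point it at your device’s mounted DCIM folder, or simply use your manufacturer’s own backup tool.

```python
# Minimal backup sketch (illustrative only): copy a photo folder into a
# dated backup directory. Source/destination paths are placeholders.
import datetime
import shutil
from pathlib import Path

def backup_photos(src: str, dest_root: str) -> Path:
    """Copy everything under src into dest_root/photos-YYYY-MM-DD."""
    stamp = datetime.date.today().isoformat()
    dest = Path(dest_root) / f"photos-{stamp}"
    shutil.copytree(src, dest, dirs_exist_ok=True)
    return dest

# Hypothetical usage: backup_photos("/mnt/phone/DCIM", "/backups")
```

The point is not the tooling but the habit: if a current copy of your photos exists somewhere a filecoder can’t reach, the ransom demand loses its teeth.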

Lock off apps which might give away information

Apps such as Dropbox can contain information that is very useful to cybercriminals – a passport scan or a photograph of a credit card, for instance. There are various options for hiding and locking apps – the free App Locker remains highly popular, despite its slightly annoying adware, which inserts pop-up ads throughout the OS. Download it from Google Play, and lock off sensitive apps – messaging, email, social networking, file storage, banking – behind a PIN or password.

Inspect every app’s permissions before installing

When installing an Android app, you will see a list of “Permissions” – functions the app is allowed to access. Permissions such as “Full network access” or the ability to send and receive SMS messages should make you think hard about installing the app. This is not a guarantee the app is malicious – Facebook’s list of permissions is long and alarming – but when such permissions are attached to a screensaver, clock, or other app with no logical reason to need communications abilities, treat them as a warning that you might be dealing with malware.
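To make that mental check concrete, here is a hypothetical Python sketch of the comparison: a simple app’s requested permissions set against capabilities it has no obvious need for. The permission strings are real Android identifiers, but the “suspicious” list and the logic are our own illustration, not anything Google Play actually runs.

```python
# Illustrative only: flag requested permissions that a simple utility app
# (clock, screensaver) has no obvious reason to need. The permission names
# are real Android identifiers; the "suspicious" set is our own example.

SUSPICIOUS_FOR_SIMPLE_APPS = {
    "android.permission.SEND_SMS",
    "android.permission.RECEIVE_SMS",
    "android.permission.READ_CONTACTS",
    "android.permission.RECORD_AUDIO",
}

def flag_permissions(requested):
    """Return the requested permissions worth a second look, sorted."""
    return sorted(set(requested) & SUSPICIOUS_FOR_SIMPLE_APPS)

# A "clock" app that wants to send SMS messages is a classic warning sign.
print(flag_permissions(["android.permission.INTERNET",
                        "android.permission.SEND_SMS"]))
# → ['android.permission.SEND_SMS']
```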

Use a mobile security app

Android malware used to be dismissed as a myth – or largely an annoyance designed to run up bills via premium SMS messages. The discovery of PC-like malware such as Android/Simplocker shows just how fast malware is evolving for Google’s devices – and how like its PC cousins it’s becoming. Google’s own policing of its Play Store has improved hugely, but for peace of mind, a regular malware scan of your device is recommended. ESET’s Lipovsky says, “A mobile security app such as ESET Mobile Security for Android will keep malware off your device.” Set the app to scan your phone regularly and automatically.

Use Google’s own defenses to the full

Google offers a pretty decent selection of security features built in – including a location tracker, which can help find a lost device. Visit Google’s Android Device Manager page to activate it while logged into your Google account and you’ll be able to force a device on silent mode to ring, remote-lock a device, and view its location on a map. If you own several Androids, you’ll be able to see them all. More advanced protection is offered by AV programs such as ESET’s Mobile Security and Antivirus, but Google’s own, rolled out quietly to any users of Android 2.2 and above last autumn, is a good first step.

Never pay a ransomware author

While the implementation of the encryption in Android/Simplocker is clumsy compared to notorious PC malware such as Cryptolocker, it can still effectively destroy files. Lipovsky advises that the one thing users must not do is pay up: “The malware is fully capable of encrypting the user’s files, which may be lost if the encryption key is not retrieved. While the malware does contain functionality to decrypt the files, we strongly recommend against paying up – not only because that will only motivate other malware authors to continue these kinds of filthy operations, but also because there is no guarantee that the crook will keep their part of the deal and actually decrypt them.”
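To see why the files hinge on the attacker’s key, consider this toy Python sketch of symmetric encryption. It uses a hash-derived XOR keystream purely for brevity – an assumption of ours, not Simplocker’s actual scheme, which uses a standard cipher (AES) – but the moral is the same: with the key, recovery is trivial; without it, the ciphertext is noise.

```python
# Toy symmetric encryption (XOR with a hash-derived keystream) -- for
# illustration only; real filecoders use standard ciphers such as AES.
import hashlib
import os

def keystream(key: bytes, length: int) -> bytes:
    """Derive a pseudo-random byte stream of the given length from key."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor_crypt(data: bytes, key: bytes) -> bytes:
    """XOR data against the keystream; applying it twice decrypts."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

photo = b"irreplaceable holiday photo"
key = os.urandom(32)              # held only by the attacker
locked = xor_crypt(photo, key)    # what the victim is left with

assert xor_crypt(locked, key) == photo             # with the key: full recovery
assert xor_crypt(locked, os.urandom(32)) != photo  # wrong key: just noise
```

Paying the ransom buys, at best, a promise that the holder of `key` will run the second line for you – which is exactly why backups, not payment, are the real defence.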

Hackers Infiltrate Desk Phones for Epic Office Pranks

An office deskphone hacked via ethernet to show an image on its screen. The phone has been covered in electrical tape and paper to obscure its model. Photo: Andy Greenberg/WIRED
A workplace tip: If you’re planning an office prank war, don’t target someone with the skills to reverse-engineer and control the phone on your desk.
That’s the lesson of a demonstration hackers Brandon Edwards and Ben Nell have planned for the Summercon security conference in New York today. After months of research that began with Edwards’ quest to avenge a coworker’s hazing, Edwards and Nell found vulnerabilities in a common desktop telephone that let them take control of it from any computer on the local network. With the phone fully under their command, they’ve made it perform mischief ranging from playing audio files to displaying pictures of their choosing.
Good natured pranks aside, their work shows the potential for more nefarious hacks like surreptitiously recording conversations or sniffing traffic from a connected PC.
“It’s a relatively simple device once you’re inside of it,” says Edwards. “We can make it do pretty much anything a phone can do.”
When Edwards started his job as a researcher at cloud security firm SilverSky in January, he says, a coworker sent a lewd email as a prank, then claimed the note was written by someone who’d accessed his keyboard. Edwards says he responded by spoofing an email from that guy to his boss, seeking enrollment in an HR training class on sexual harassment.
Still, Edwards wasn’t satisfied, and began daydreaming about a more epic retaliation involving the phone on his coworker’s desk. He called up his friend Nell, a security researcher and reverse engineering guru who immediately hit eBay to order the same phone used in Edwards’ office. Working together, Nell and Edwards found a debugging port on the back of the phone, spliced a connection to their laptops, and dumped the device’s memory. They soon discovered, as Nell puts it, “a mountain of bugs.”
“It was like you were in a room full of bugs, and you couldn’t not step on them,” he says. Among the plentiful coding errors was one that allowed them to execute what’s known as a buffer overflow, a type of exploit that let them write into the phone’s memory and run arbitrary commands with no limits on their user privileges.
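The bug class is easy to picture. The Python sketch below simulates device memory as a single bytearray, with a layout we have invented for illustration (a 16-byte input buffer sitting just before bytes standing in for a saved return address): an unchecked copy of attacker-supplied data spills past the buffer and clobbers the adjacent region – the kind of out-of-bounds write that overflow exploits build on.

```python
# Simulated memory layout (invented for illustration): a 16-byte input
# buffer followed directly by 4 bytes standing in for a saved return address.
memory = bytearray(32)
BUF_START, BUF_SIZE = 0, 16
RET_ADDR = BUF_START + BUF_SIZE

def unchecked_copy(data: bytes) -> None:
    # The classic mistake: no check of len(data) against BUF_SIZE.
    memory[BUF_START:BUF_START + len(data)] = data

unchecked_copy(b"A" * 20)              # 4 bytes past the end of the buffer
print(memory[RET_ADDR:RET_ADDR + 4])   # → bytearray(b'AAAA'): clobbered
```

In a real C firmware image there is no bytearray policing the bounds at all: the overwritten value is a live return address, and controlling it means controlling what code the phone runs next.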
Nell and Edwards asked WIRED to withhold the name of the phone vendor whose coding flaws they uncovered and say they won’t reveal it during their demonstration. They have not yet told the manufacturer about their tests and would like to avoid generating controversy for their employers. But Edwards speculates that the phone they targeted isn’t any more vulnerable than others; most desktop phone manufacturers, he says, depend upon the obscurity of their code, as opposed to any real security, to keep hackers at bay. “Everyone from Cisco to Polycom to Avaya to Shoretel likely have similar issues,” he says.
In a preview of their conference demonstration for WIRED, Nell and Edwards showed that they were able to hijack the target phone – with only an ethernet connection to their laptop – to simulate the hijinks they might inflict on a coworker. (A portion of that demonstration is shown in the video above, with electrical tape used to mask the phone’s brand and model.) They typed text that appeared on the phone’s screen, writing “Knock, knock, neo” in a Matrix reference. They made the phone display images like a skull and a smiley face. They played audio files like “shall we play a game?” from the 1983 film War Games. For a creepy finale, they had the phone play a 30-second clip of my own voice pulled from YouTube.
Nell and Edwards say they’ve only started exploring what else they’re able to do with the phone, but believe they could use it for tricks with less prankish security consequences, like turning on its speakerphone mic to record audio while disabling the LED indicator that might alert users. They also point out that many offices simplify their networking setup by plugging computers’ ethernet cables into deskphones instead of wall ports. Install spyware on the phone, and you could likely use it to eavesdrop on all the traffic sent to and from a connected PC. “If you’re able to get onto a device like this and execute whatever code you want, you can turn it into a personal network tap,” says Nell.
All of those attacks, Nell and Edwards admit, would first require access to the company’s internal network. But if a hacker could gain an initial foothold, say by sending a spear phishing email with a malware-laden link that took over a staffer’s computer, a vulnerable deskphone might make a useful secondary target in that spying campaign.
Edwards, meanwhile, is still limiting his phone-hacking targets to his coworkers. He’s still planning to hijack the deskphone of his officemate as soon as his exploit is perfected, and says he’s even received permission from his company’s senior executives. Some unwitting sales guy is in for a nasty surprise.