Wednesday, 7 August 2013

Working as an ethical hacker

The term "ethical hacker" as it is used today is, if you ask me, somewhat imprecise. After all, a hacker in it for the money could be said to follow his or her own moral compass on what is right and what is wrong - the only difference is that those ethics aren't compatible with those held by most people.
The majority of people in the security industry use the term interchangeably with "white hat" - a computer hacker who performs all kinds of penetration testing against an organization's information systems at the behest of that same organization, so that it can secure those systems against black hats (straight-up "bad guys" hacking for money) and grey hats.
John Yeo, EMEA Director at Trustwave, is one of those. "An ethical hacker or penetration tester is someone who is an expert practitioner when it comes to using the same tools and techniques as the bad guys do, but in a controlled manner, within a professional services wrapper," he says.
On the other hand, freelance security tester, researcher and developer Robin Wood takes a more flexible view of things.
"I would say if you are ethical you are doing things without malicious intent," he says. "It doesn't mean you always stay strictly within the letter of the rules or law but when you stray outside you do it out of curiosity rather than any desire to do harm."
Technically, his idea of an ethical hacker is closer to the meaning of the term "grey hat" - a hacker who may act illegally, but is mostly motivated by the challenge of breaking into systems and by the wish to help organizations with inadequate defenses strengthen their security posture. Sometimes, the grey hat might want to be compensated for his or her trouble.
In short, the technical difference between a white hat and a grey hat is that the former has secured an organization's permission to test and attack its systems, while the latter has not.
But what I get from both of my interlocutors is that an inquisitive mind, flexibility and a constant desire to ask "but what if?" are crucial for any hacker, as is the ability to think like a bad guy.
"You need to have a desire to know how things work and what happens when you ask them to do things they were never designed to do," says Wood.
Yeo definitely fits that definition. "I spent my youth playing with computers and had a fascination with any kind of gadgets and electronics – wanting to know how and why they worked like they do, to the point where you can make it do something it wasn’t designed to do," he shared with Help Net Security. "Being fortunate enough to have a computer at home from a young age helped, as well as being an early internet adopter."
Wood also started hacking while still underage. "I got started playing around on the high school network with a friend. Luckily the IT teacher, who also ran the network, didn't mind us exploring and put up with it in return for us helping her out with admin chores," he says.
Another similarity between the two is that they both chose to pursue a Computer Science degree ("I thought that would be a lot more inspiring than it actually was," Yeo admits) and then went on to become penetration testers (Wood did a nine-year stint as a desktop and web app developer before turning to security research and pentesting).
As one of the regional directors at Trustwave SpiderLabs, Yeo is now responsible for running a team of consultants across multiple countries in the EMEA region.
"The majority of my time is spent meeting with customers; I don't conduct very much ethical hacking or forensic investigation work myself anymore. Instead, I have the pleasure of scouring the region for top-of-the-market talent, hiring them into the team and providing a fun environment that retains great people," he explained, but added that it can be very hard to resist getting involved when one of the team has something really interesting going on.
Wood, on the other hand, is currently an independent contractor. "I freelanced for a few years, then contracted with a local security firm for a year, then moved to full time with them for two years," he said. "I went back to freelancing last July and have been run off my feet with work since."
To prepare himself for his chosen profession, he has done a couple of SANS courses and various other independent ones.
"I find it hard to learn unless I'm in a classroom or have a very focused reason for learning so don't work well with normal home learning courses. Having said that, the ones that have engaged me more than most are the ones from SecurityTube," he shared.
What he wants to achieve with ethical hacking is to expand his own knowledge and that of the people around him. "I enjoy teaching and love seeing people get a glimpse into the security world, whether it is popping their first shell or realizing how many email addresses their company is leaking through Google."
Yeo is more focused on growing the breadth and depth of capability of Trustwave SpiderLabs' global team. The team itself has been growing pretty quickly, he noted, and it now includes over 100 ethical hackers and security researchers, who make up around 10 percent of the company's entire workforce around the globe. And they have been busy - all in all, they performed a little over 2,500 penetration testing engagements in 2012.
When it comes to knowledge, tools and techniques used by ethical hackers, they don't differ that much from those employed by black hats.
"Some people swear that you can't be a real pen-tester if you don't use BackTrack/Kali, I'd say that is rubbish," says Wood. "I'm currently on a test where the best tool is a web browser and a JavaScript console, the next job may require Linux command line tools, the one after that may be MS SQL Server so I can connect to and audit SQL databases. Having said all that, one tool I use on nearly every job is Dradis to keep my notes together and to help when report writing."
Report writing is definitely one thing that white hats do more than black hats, and it is part of their responsibility to those who hired them.
"It’s important for the ethical hacker to be a good consultant - there's little use being a highly skilled penetration tester if you’re not able to convey the specific technical details of complex vulnerabilities in a coherent manner," says Yeo. "The ultimate objective is helping customers understand their risk and help them secure their data; being a technically gifted and committed penetration tester is only part of the journey."

NSA & Breaking Iranian Codes

Ahmed Chalabi is accused of informing the Iranians that the U.S. had broken their intelligence codes. What exactly did the U.S. break? How could the Iranians verify Chalabi's claim, and what might they do about it?
This is an attempt to answer some of those questions.
Every country has secrets. In the U.S., the National Security Agency has the job of protecting our secrets while trying to learn the secrets of other countries. (Actually, the CIA has the job of learning other countries' secrets in general, while the NSA has the job of eavesdropping on other countries' electronic communications.)
To protect their secrets, Iranian intelligence -- like the intelligence services of all countries -- communicates in code. These aren't pencil-and-paper codes, but software-based encryption machines. The Iranians probably didn't build their own, but bought them from a company like the Swiss-owned Crypto AG. Some encryption machines protect telephone calls, others protect fax and Telex messages, and still others protect computer communications.
As ordinary citizens without serious security clearances, we don't know which machines' codes the NSA compromised, nor do we know how. It's possible that the U.S. broke the mathematical encryption algorithms that the Iranians used, as the British and Poles did with the German codes during World War II. It's also possible that the NSA installed a "back door" into the Iranian machines. This is basically a deliberately placed flaw in the encryption that allows someone who knows about it to read the messages.
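To make the idea of a back door concrete, here is a toy sketch (entirely hypothetical, not based on any real product's design). The "flaw" is a key generator that appears to produce random 256-bit keys but actually draws them from a tiny hidden seed space, so anyone who knows the flaw can brute-force every possible key in moments:

```python
# Toy back-door illustration: keys look like 256-bit random values,
# but their real entropy is only 16 bits -- a flaw only its planter knows.
import hashlib

SEED_SPACE = 2 ** 16  # the hidden weakness: only 65,536 possible keys

def backdoored_keygen(seed: int) -> bytes:
    """Derive a 'random-looking' 32-byte key from a tiny seed."""
    return hashlib.sha256(seed.to_bytes(4, "big")).digest()

def attacker_recover_seed(key: bytes) -> int:
    """Someone who knows the flaw just tries every possible seed."""
    for s in range(SEED_SPACE):
        if backdoored_keygen(s) == key:
            return s
    raise ValueError("key not produced by the backdoored generator")

# The user believes the key is unguessable; the back-door holder recovers it.
key = backdoored_keygen(12345)
recovered = attacker_recover_seed(key)
assert recovered == 12345
```

Real back doors are far subtler (weakened random-number generators, leaked key material hidden inside message formatting), but the principle is the same: the system looks secure to its users while remaining transparent to whoever planted the flaw.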
There are other possibilities: the NSA might have had someone inside Iranian intelligence who gave them the encryption settings required to read the messages. John Walker sold the Soviets this kind of information about U.S. naval codes for years, until his arrest in 1985. Or the Iranians could have had sloppy procedures that allowed the NSA to break the encryption.
Of course, the NSA has to intercept the coded messages in order to decrypt them, but it has a worldwide array of listening posts that can do just that. Most communications are in the air -- radio, microwave, etc. -- and can be easily intercepted. Communications via buried cable are much harder to intercept, and require someone inside Iran to tap into them.
But the point of using an encryption machine is to allow sending messages over insecure and interceptible channels, so it is very probable that the NSA had a steady stream of Iranian intelligence messages to read.
Whatever the methodology, this would be an enormous intelligence coup for the NSA. It was also a secret in itself. If the Iranians ever learned that the NSA was reading their messages, they would stop using the broken encryption machines, and the NSA's source of Iranian secrets would dry up. The secret that the NSA could read the Iranian secrets was more important than any specific Iranian secrets that the NSA could read.
The result was that the U.S. would often learn secrets it couldn't act upon, as acting would give away the secret. During World War II, the Allies would go to great lengths to make sure the Germans never realized that their codes were broken. The Allies would learn about U-boat positions, but wouldn't bomb the U-boats until they had spotted them by some other means -- otherwise the Nazis might get suspicious.
There's a story about Winston Churchill and the bombing of Coventry: supposedly he knew the city would be bombed but could not warn its citizens. The story is apocryphal, but is a good indication of the extreme measures countries take to protect the secret that they can read an enemy's secrets.
And there are many stories of slip-ups. In 1986, after the bombing of a Berlin disco, then-President Reagan said that he had irrefutable evidence that Qaddafi was behind the attack. Libyan intelligence realized that their diplomatic codes were broken, and changed them. The result was an enormous setback for U.S. intelligence, all for just a slip of the tongue.
Iranian intelligence supposedly tried to test Chalabi's claim by sending a message about an Iranian weapons cache. If the U.S. acted on this information, then the Iranians would know that their codes were broken. The U.S. didn't, which shows it is very smart about this. Maybe it knew the Iranians suspected, or maybe it was waiting to manufacture a plausible fictitious reason for knowing about the weapons cache.
So now the NSA's secret is out. The Iranians have undoubtedly changed their encryption machines, and the NSA has lost its source of Iranian secrets. But little else is known. Who told Chalabi? Only a few people would know this important U.S. secret, and the snitch is certainly guilty of treason. Maybe Chalabi never knew, and never told the Iranians. Maybe the Iranians figured it out some other way, and they are pretending that Chalabi told them in order to protect some other intelligence source of theirs.
During the 1950s, the Americans dug under East Berlin in order to eavesdrop on a communications cable. They received all sorts of intelligence until the East Germans discovered the tunnel. However, the Soviets knew about the operation from the beginning, because they had a spy in the British intelligence organization. But they couldn't stop the digging, because that would expose George Blake as their spy.
If the Iranians knew that the U.S. knew, why didn't they pretend not to know and feed the U.S. false information? Or maybe they've been doing that for years, and the U.S. finally figured out that the Iranians knew. Maybe the U.S. knew that the Iranians knew, and are using the fact to discredit Chalabi.
The really weird twist to this story is that the U.S. has already been accused of doing that to Iran. In 1992, Iran arrested Hans Buehler, a Crypto AG employee, on suspicion that Crypto AG had installed back doors in the encryption machines it sold to Iran -- at the request of the NSA. He proclaimed his innocence through repeated interrogations, and was finally released nine months later in 1993 when Crypto AG paid a million dollars for his freedom -- then promptly fired him and billed him for the release money. At this point Buehler started asking inconvenient questions about the relationship between Crypto AG and the NSA.
So maybe Chalabi's information is from 1992, and the Iranians changed their encryption machines a decade ago.
Or maybe the NSA never broke the Iranian intelligence code, and this is all one huge bluff.
In this shadowy world of cat-and-mouse, it's hard to be sure of anything.