
September 19 2017


IDA 7.0: Qt 5.6.0 configure options & patch

A handful of our users have already requested information regarding the Qt 5.6.0 build, that is shipped with IDA 7.0.

Configure options

Here are the options that were used to build the libraries on each platform:

  • Windows: ...\5.6.0\configure.bat "-nomake" "tests" "-qtnamespace" "QT" "-confirm-license" "-accessibility" "-opensource" "-force-debug-info" "-platform" "win32-msvc2015" "-opengl" "desktop" "-prefix" "C:/Qt/5.6.0-x64"
    • Note that you will have to build with Visual Studio 2015 to obtain compatible libs
  • Linux: .../5.6.0/configure "-nomake" "tests" "-qtnamespace" "QT" "-confirm-license" "-accessibility" "-opensource" "-force-debug-info" "-platform" "linux-g++-64" "-developer-build" "-fontconfig" "-qt-freetype" "-qt-libpng" "-glib" "-qt-xcb" "-dbus" "-qt-sql-sqlite" "-gtkstyle" "-prefix" "/usr/local/Qt/5.6.0-x64"
  • Mac OSX: .../5.6.0/configure "-nomake" "tests" "-qtnamespace" "QT" "-confirm-license" "-accessibility" "-opensource" "-force-debug-info" "-platform" "macx-g++" "-debug-and-release" "-fontconfig" "-qt-freetype" "-qt-libpng" "-qt-sql-sqlite" "-prefix" "/Users/Shared/Qt/5.6.0-x64"


In addition to the specific configure options, the Qt build that ships with IDA includes the following patch. You should therefore apply it to your own Qt 5.6.0 sources before compiling, in order to obtain similar binaries.

Note that this patch should apply without any modification against the 5.6.0 release as found there. You may have to fiddle with it if your Qt 5.6.0 sources come from somewhere else.

September 10 2017


Is this research ethical?

The Economist features face recognition on its front page, reporting that deep neural networks can now tell whether you’re straight or gay better than humans can just by looking at your face. The research they cite is a preprint, available here.

Its authors Kosinski and Wang downloaded thousands of photos from a dating site, ran them through a standard feature-extraction program, then classified gay vs straight using a standard statistical classifier, which they found could tell the men seeking men from the men seeking women. My students pretty well instantly called this out as selection bias; if gay men consider boyish faces to be cuter, then they will upload their most boyish photo. The paper authors suggest their finding may support a theory that sexuality is influenced by fetal testosterone levels, but when you don’t control for such biases your results may say more about social norms than about phenotypes.
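
The selection-bias point can be made concrete with a toy simulation (purely illustrative, not a model of the actual study). Give two groups the identical underlying trait distribution, but let one group upload the most extreme of several candidate photos; a naive threshold “classifier” on the uploaded photo then separates the groups well above chance, even though the groups do not differ at all:

```python
import random
random.seed(0)

N = 2000
# Both groups draw the underlying facial trait from the SAME distribution.
# Group A uploads the most extreme of three candidate photos (selection);
# group B uploads a single unselected photo.
group_a = [max(random.random() for _ in range(3)) for _ in range(N)]
group_b = [random.random() for _ in range(N)]

# A naive threshold "classifier" that only sees the uploaded photo.
threshold = 0.625
correct = (sum(x > threshold for x in group_a)
           + sum(x <= threshold for x in group_b))
accuracy = correct / (2 * N)
print(f"accuracy from selection alone: {accuracy:.2f}")  # well above 50%
```

Any classifier trained on such photos partly learns the uploading behaviour, not the phenotype.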

Quite apart from the scientific value of the research, which is perhaps best assessed by specialists, I’m concerned with the ethics and privacy aspects. I am surprised that the paper doesn’t report having been through ethical review; the authors consider that photos on a dating website are public information and appear to assume that privacy issues simply do not arise.

Yet UK courts decided, in Campbell v Mirror, that privacy could be violated even by photos taken on the public street, and European courts have come to similar conclusions in I v Finland and elsewhere. For example, a Catholic woman is entitled to object to the use of her medical record in research on abortifacients and contraceptives even if the proposed use is fully anonymised and presents no privacy risk whatsoever. The dating site users would be similarly entitled to object to their photos being used in research to which they might have an ethical objection, even if they could not be identified from their photos. There are surely going to be people who object to research in any nature vs nurture debate, especially on a charged topic such as sexuality. And the whole point of the Economist’s coverage is that face-recognition technology is now good enough to work at population scale.

What do LBT readers think?

August 26 2017


Is the City force corrupt, or just clueless?

This week brought an announcement from a banking association that “identity fraud” is soaring to new levels, with 89,000 cases reported in the first six months of 2017 and 56% of all fraud reported by its members now classed as “identity fraud”.

So what is “identity fraud”? The announcement helpfully clarifies the concept:

“The vast majority of identity fraud happens when a fraudster pretends to be an innocent individual to buy a product or take out a loan in their name. Often victims do not even realise that they have been targeted until a bill arrives for something they did not buy or they experience problems with their credit rating. To carry out this kind of fraud successfully, fraudsters need access to their victim’s personal information such as name, date of birth, address, their bank and who they hold accounts with. Fraudsters get hold of this in a variety of ways, from stealing mail through to hacking; obtaining data on the ‘dark web’; exploiting personal information on social media, or though ‘social engineering’ where innocent parties are persuaded to give up personal information to someone pretending to be from their bank, the police or a trusted retailer.”

Now back when I worked in banking, if someone went to Barclays, pretended to be me, borrowed £10,000 and legged it, that was “impersonation”, and it was the bank’s money that had been stolen, not my identity. How did things change?

The members of this association are banks and credit card issuers. In their narrative, those impersonated are treated as targets, when the targets are actually those banks on whom the impersonation is practised. This is a precursor to refusing bank customers a “remedy” for “their loss” because “they failed to protect themselves.”

Now “dishonestly making a false representation” is an offence under s2 Fraud Act 2006. Yet what is the police response?

The Head of the City of London Police’s Economic Crime Directorate does not see the banks’ narrative as dishonest. Instead he goes along with it: “It has become normal for people to publish personal details about themselves on social media and on other online platforms which makes it easier than ever for a fraudster to steal someone’s identity.” He continues: “Be careful who you give your information to, always consider whether it is necessary to part with those details.” This is reinforced with a link to a police website with supposedly scary statistics: 55% of people use open public wifi and 40% of people don’t have antivirus software (like many security researchers, I’m guilty on both counts). This police website has a quote from the Head’s own boss, a Commander who is the National Police Coordinator for Economic Crime.

How are we to rate their conduct? Given that the costs of the City force’s Dedicated Card and Payment Crime Unit are borne by the banks, perhaps they feel obliged to sing from the banks’ hymn sheet. Just as the MacPherson report criticised the Met for being institutionally racist, we might perhaps describe the City force as institutionally corrupt. There is a wide literature on regulatory capture, and many other examples of regulators keen to do the banks’ bidding. And it’s not just the City force. There are disgraceful examples of the Metropolitan Police Commissioner and GCHQ endorsing the banks’ false narrative. However, people are starting to notice, including the National Audit Office.

Or perhaps the police are just clueless?

August 22 2017


History of the Crypto Wars in Britain

Back in March I gave an invited talk to the Cambridge University Ethics in Mathematics Society on the Crypto Wars. They have just put the video online here.

We spent much of the 1990s pushing back against attempts by the intelligence agencies to seize control of cryptography. From the Clipper Chip through the regulation of trusted third parties to export control, the agencies tried one trick after another to make us all less secure online, claiming that thanks to cryptography the world of intelligence was “going dark”. Quite the opposite was true; with communications moving online, with people starting to carry mobile phones everywhere, and with our communications and traffic data mostly handled by big firms who respond to warrants, law enforcement has never had it so good. Twenty years ago it cost over a thousand pounds a day to follow a suspect around, and weeks of work to map his contacts; Ed Snowden told us how nowadays an officer can get your location history with one click and your address book with another. In fact, searches through the contact patterns of whole populations are now routine.

The checks and balances that we thought had been built into the RIP Act in 2000 after all our lobbying during the 1990s turned out to be ineffective. GCHQ simply broke the law and, after Snowden exposed them, Parliament passed the IP Act to declare that what they did was all right now. The Act allows the Home Secretary to give secret orders to tech companies to do anything they physically can to facilitate surveillance, thereby delighting our foreign competitors. And Brexit means the government thinks it can ignore the European Court of Justice, which has already ruled against some of the Act’s provisions. (Or perhaps Theresa May chose a hard Brexit because she doesn’t want the pesky court in the way.)

Yet we now see the Home Secretary repeating the old nonsense about decent people not needing privacy along with law enforcement officials on both sides of the Atlantic. Why doesn’t she just sign the technical capability notices she deems necessary and serve them?

In these fraught times it might be useful to recall how we got here. My talk to the Ethics in Mathematics Society was a personal memoir; there are many links on my web page to relevant documents.


A quick post on Wikipedia-scrubbing and a historical document on binary diffing

I am a huge fan of Wikipedia -- I sometimes browse Wikipedia like other people watch TV, skipping from topic to topic and - on average - being impressed by the quality of the articles.

One thing I have noticed in recent years, though, is that the grassroots-democratic principles of Wikipedia open it up to manipulation and whitewashing - Wikipedia's guidelines are strict, and a person can get a lot of negative information removed just by cleverly using the guidelines to challenge entries. This is no fault of Wikipedia -- in fact, I think the guidelines are good and useful -- but it is often instructive to read the history of a particular page.

I recently stumbled over a particularly amusing example of this, and feel compelled to write about it.

More than twelve years ago, when BinDiff was brand-new and wingraph32.exe was still the graph visualization tool of choice, there was a controversy surrounding a product called "CherryOS" - which purported to be an Apple emulator. A student had alleged on his website that "CherryOS" had misappropriated source code from an open-source project called "PearPC", and the founder of the company selling CherryOS (somebody by the name of Arben Kryeziu) had threatened the student legally over this claim.

In order to help a good cause, we did a quick analysis of the code similarities between CherryOS and PearPC, and found that approximately half of the code in CherryOS was verbatim copy & paste from PearPC. We wrote a small report, provided it to the lawyer of the student under allegation, and the entire kerfuffle died down quickly. Wikipedia used to have a page that detailed some of the drama for a few years thereafter.
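
Our actual analysis matched functions structurally with BinDiff, but the flavour of a “half the code is verbatim copy & paste” finding can be conveyed by a much cruder sketch: count what fraction of one binary’s byte n-grams reappear in the other. Everything below (names, sizes, the n-gram length) is illustrative only:

```python
# Crude byte-level similarity sketch -- illustrative only; the real
# analysis compared disassembled functions, not raw bytes.
def ngrams(data: bytes, n: int = 8) -> set:
    """All length-n byte substrings of data."""
    return {data[i:i + n] for i in range(len(data) - n + 1)}

def shared_fraction(a: bytes, b: bytes, n: int = 8) -> float:
    """Fraction of a's n-grams that also occur somewhere in b."""
    ga = ngrams(a, n)
    return len(ga & ngrams(b, n)) / len(ga) if ga else 0.0

# Toy example: the "suspect" blob embeds a verbatim copy of the original.
original = bytes(range(64))
suspect = b"\xff" * 32 + original + b"\xee" * 32
print(shared_fraction(original, suspect))  # 1.0: every n-gram reappears
```

On real binaries one would first strip relocations and compiler boilerplate, which is exactly the noise a structural tool like BinDiff avoids.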

I recently stumbled over the Wikipedia page of CherryOS, and was impressed: The page had been cleaned of any information that supported the code-theft claims, and offered a narrative where there had never been conclusive consensus that CherryOS was full of misappropriated code. This is not a reflection of what happened back then at all.

Anyhow, in a twist of fate, I also found an old USB stick which still contained a draft of the 2005 note we wrote. For the sake of history, here it is :-)

I had forgotten how painful it was to look at disassembly CFGs in wingraph32. Sometimes, when I am frustrated at the speed at which RE tools improved during my professional life, it is useful to be reminded what the dark ages looked like.

August 14 2017


Compartmentation is hard, but the Big Data playbook makes it harder still

A new study of Palantir’s systems and business methods makes sobering reading for people interested in what big data means for privacy.

Privacy scales badly. It’s OK for the twenty staff at a medical practice to have access to the records of the ten thousand patients registered there, but when you build a centralised system that lets every doctor and nurse in the country see every patient’s record, things go wrong. There are even sharper concerns in the world of intelligence, which agencies try to manage using compartmentation: really sensitive information is often put in a compartment that’s restricted to a handful of staff. But such systems are hard to build and maintain. Readers of my book chapter on the subject will recall that while US Naval Intelligence struggled to manage millions of compartments, the CIA let more of their staff see more stuff – whereupon Aldrich Ames betrayed their agents to the Russians.
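
The access rule behind compartmentation is simple to state; the engineering pain lies in administering millions of compartments. A minimal sketch of the check, with hypothetical clearance levels and codewords:

```python
from dataclasses import dataclass

@dataclass
class Document:
    clearance: int                          # hierarchical level, e.g. 3
    compartments: frozenset = frozenset()   # codewords, e.g. {"HUMINT-X"}

@dataclass
class Officer:
    clearance: int
    compartments: frozenset = frozenset()

def may_read(officer: Officer, doc: Document) -> bool:
    """Bell-LaPadula-style check: the reader needs a sufficient level
    AND every compartment codeword carried by the document."""
    return (officer.clearance >= doc.clearance
            and doc.compartments <= officer.compartments)

analyst = Officer(clearance=3, compartments=frozenset({"HUMINT-X"}))
report = Document(clearance=3,
                  compartments=frozenset({"HUMINT-X", "SIGINT-Y"}))
print(may_read(analyst, report))  # False: missing the SIGINT-Y codeword
```

The check itself is two comparisons; deciding who gets which codewords, and keeping that current across an organisation, is where systems fail.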

After 9/11, the intelligence community moved towards the CIA model, in the hope that with fewer compartments they’d be better able to prevent future attacks. We predicted trouble, and Snowden duly came along. As for civilian agencies such as Britain’s NHS and police, no serious effort was made to protect personal privacy by compartmentation, with multiple consequences.

Palantir’s systems were developed to help the intelligence community link, fuse and visualise data from multiple sources, and are now sold to police forces too. It should surprise no-one to learn that they do not compartment information properly, whether within a single force or even between forces. The organised crime squad’s secret informants can thus become visible to traffic cops, and even to cops in other forces, with tragically predictable consequences. Fixing this is hard, as Palantir’s market advantage comes from network effects and the resulting scale. The more police forces they sign up the more data they have, and the larger they grow the more third-party databases they integrate, leaving private-sector competitors even further behind.

This much we could have predicted from first principles, but the details of how Palantir operates, and what police forces dislike about it, are worth studying.

What might be the appropriate public-policy response? Well, the best analysis of competition policy in the presence of network effects is probably Lina Khan’s, and her analysis would suggest in this case that police intelligence should be a regulated utility. We should develop those capabilities that are actually needed, and the right place for them is the Police National Database. The public sector is better placed to commit the engineering effort to do compartmentation properly, both there and in other applications where it’s needed, such as the NHS. Good engineering is expensive – but as the Los Angeles Police Department found, engaging Palantir can be more expensive still.

August 02 2017


Cambridge2Cambridge 2017

Following on from various other similar events we organised over the past few years, last week we hosted our largest ethical hacking competition yet, Cambridge2Cambridge 2017, with over 100 students from some of the best universities in the US and UK working together over three days. Cambridge2Cambridge was founded jointly by MIT CSAIL (in Cambridge Massachusetts) and the University of Cambridge Computer Laboratory (in the original Cambridge) and was first run at MIT in 2016 as a competition involving only students from these two universities. This year it was hosted in Cambridge UK and we broadened the participation to many more universities in the two countries. We hope in the future to broaden participation to more countries as well.

Cambridge 2 Cambridge 2017 from Frank Stajano Explains on Vimeo.

We assigned the competitors to teams that were mixed in terms of both provenance and experience. Each team had competitors from both the US and the UK, and no two people from the same university; each team also mixed experienced and less experienced players, based on the qualifier scores. We did so to ensure that even those who only started learning about ethical hacking when they heard about this competition would have an equal chance of being in the team that wins the gold. We also mixed provenance to ensure that, during these three days, students collaborated with people they didn’t already know.
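
One simple way to balance experience across teams is a “snake draft” over qualifier scores. The sketch below is not our actual assignment procedure (it ignores the nationality and university constraints we also imposed), and the names and scores are made up:

```python
# "Snake draft": rank players by score, deal them out to teams in
# alternating direction each round, so strong and weak players spread
# evenly across teams.
def snake_draft(players, num_teams):
    """players: list of (name, score) pairs. Returns a list of teams."""
    ranked = sorted(players, key=lambda p: p[1], reverse=True)
    teams = [[] for _ in range(num_teams)]
    for offset in range(0, len(ranked), num_teams):
        chunk = ranked[offset:offset + num_teams]
        if (offset // num_teams) % 2:      # reverse direction every round
            chunk = chunk[::-1]
        for team, player in zip(teams, chunk):
            team.append(player)
    return teams

players = [("A", 95), ("B", 90), ("C", 80),
           ("D", 70), ("E", 60), ("F", 55)]
teams = snake_draft(players, 2)
# Total scores come out equal here: {A, D, E} = 225 vs {B, C, F} = 225.
```

Extending this to also forbid same-university teammates turns it into a small constraint-satisfaction problem, which is why we ran the assignment rather than letting teams self-select.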

Despite their different backgrounds, what the attendees had in common was that they were all pretty smart and had an interest in cyber security. It’s a safe bet that, ten or twenty years from now, a number of them will probably be Security Specialists, Licensed Ethical Hackers, Chief Security Officers, National Security Advisors or other high calibre security professionals. When their institution or country is under attack, they will be able to get in touch with the other smart people they met here in Cambridge in 2017, and they’ll be in a position to help each other. That’s why the defining feature of the event was collaboration, making new friends and having fun together. Unlike your standard one-day hacking contest, the ambitious three-day programme of C2C 2017 allowed for social activities including punting on the river Cam, pub crawling and a Harry Potter style gala dinner in Trinity College.

In between competition sessions we had a lively and inspirational “women in cyber” panel, another panel on “securing the future digital society”, one on “real world pentesting” and a careers advice session. On the second day we hosted several groups of bright teenagers who had been finalists in the national CyberFirst Girls Competition. We hope to inspire many more women to take up a career path that has so far been very male-dominated. More broadly, we wish to inspire many young kids, girls or boys, to engage in the thrilling challenge of unravelling how computers work (and how they fail to work) in a high-stakes mental chess game of adversarial attack and defense.

Our platinum sponsors Leidos and NCC Group endowed the competition with over £20,000 of cash prizes, awarded to the best 3 teams and the best 3 individuals. Besides the main attack-defense CTF, fought on the Leidos CyberNEXS cyber range, our other sponsors offered additional competitions, the results of which were combined to generate the overall teams and individual scores. Here is the leaderboard, showing how our contestants performed. Special congratulations to Bo Robert Xiao of Carnegie Mellon University who, besides winning first place in both team and individuals, also went on to win at DEF CON in team PPP a couple of days later.

We are grateful to our supporters, our sponsors, our panelists, our guests, our staff and, above all, our 110 competitors for making this event a success. It was particularly pleasing to see several students who had already taken part in some of our previous competitions (special mention for Luke Granger-Brown from Imperial who earned medals at every visit). Chase Lucas from Dakota State University, having passed the qualifier but not having been picked in the initial random selection, was on the reserve list in case we got funding to fly additional students; he then promptly offered to pay for his own airfare in order to be able to attend! Inter-ACE 2017 winner Io Swift Wolf from Southampton deserted her own graduation ceremony in order to participate in C2C (!), and then donated precious time during the competition to the CyberFirst girls who listened to her rapturously. Accumulating all that good karma could not go unrewarded, and indeed you can once again find her name in the leaderboard above. And I’ve only singled out a few, out of many amazing, dynamic and enthusiastic young people. Watch out for them: they are the ones who will defend the future digital society, including you and your family, from the cyber attacks we keep reading about in the media. We need many more like them, and we need to put them in touch with each other. The bad guys are organised, so we have to be organised too.

The event was covered by Sky News, ITV, BBC World Service and a variety of other media, which the official website and twitter page will undoubtedly collect in due course.

July 21 2017


AlphaBay and Hansa Market takedowns

Yesterday the FBI announced the takedown of the AlphaBay marketplace, a hidden service facilitating the sale of drugs, as well as other illicit products and services. The takedown had actually occurred weeks earlier, and had been staged to appear like an exit scam, where the operators take off with the money.

What was particularly interesting about the FBI’s takedown was that it was coordinated with the activities of the Dutch police, who had previously taken over the Hansa Market, another leading blackmarket. As the investigators were then controlling this marketplace they were able to monitor the activities of traders who had been using AlphaBay and then moved to Hansa Market.

I’ve been interested in online blackmarkets for some time, particularly those that relate to the stolen data economy. In fact, a paper that Professor Thomas Holt and I wrote was published last year. This paper outlines a number of intervention approaches, including disrupting the actual marketplaces where trade takes place.

Among our numerous suggestions are three that have been used, in combination, by this international police effort. We suggest that law enforcement promote distrust, which they did by making AlphaBay appear to have been an exit scam. We also suggest that law enforcement take over and take down marketplaces. Neither of these police approaches is new, and we point to previous examples where this has happened. In our conclusion, we stated:

Multiple interventions coordinated across different guardians, nationally and internationally, incorporating different bodies (investigative, regulatory, strategic, non-government organisations and the private sector) that have ownership of the crime prevention problem may reduce duplication of effort, as well as provide a more systematic approach with the greatest disruption effect.

The Hansa Market and AlphaBay approach demonstrates how this can be achieved. By co-ordinating the approaches, and working together, the disruptive effects of their work are likely to be much greater than if they had acted alone. It’s likely we’ll see arrests of traders and further disruption to the online drug trade.

Work by Soska and Christin found that after the Silk Road takedown, more online blackmarkets emerged and evolved. I think this evolution will continue, but perhaps marketplace administrators will have to work harder in order to earn the trust of their users.

July 12 2017


Testing the usability of offline mobile payments

Last September we spent some time in Nairobi figuring out whether we could make offline phone payments usable. Phone payments have greatly improved the lives of millions of poor people in countries like Kenya and Bangladesh, who previously didn’t have bank accounts at all but who can now send and receive money using their phones. That’s great for the 80% who have mobile phone coverage, but what about the others?

Last year I described how we designed and built a prototype system to support offline payments, with the help of a grant from the Bill and Melinda Gates Foundation, and took it to Africa to test it. Offline payments require both the sender and the receiver to enter some extra digits to ensure that the payer and the payee agree on who’s paying whom how much. We worked as hard as we could to minimise the number of digits and to integrate them into the familiar transaction flow. Would this be good enough?
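
The extra digits are in essence a short, human-comparable authentication code over the transaction details. Here is a sketch of how such a code could be derived; this illustrates the general idea only, not the actual DigiTally protocol, and the key and phone numbers are hypothetical:

```python
import hashlib
import hmac

def confirmation_code(key: bytes, payer: str, payee: str,
                      amount_cents: int, digits: int = 4) -> str:
    """Truncate an HMAC over (payer, payee, amount) to a few decimal
    digits that both parties can read out and compare."""
    msg = f"{payer}|{payee}|{amount_cents}".encode()
    mac = hmac.new(key, msg, hashlib.sha256).digest()
    value = int.from_bytes(mac[:4], "big") % (10 ** digits)
    return f"{value:0{digits}d}"

key = b"shared-secret-on-both-SIMs"   # hypothetical provisioning secret
code = confirmation_code(key, "+254700000001", "+254700000002", 50_00)
# Both phones compute the same short code from the same inputs; a
# mistyped payee or amount almost certainly produces a different code.
```

The usability question we tested was exactly how many such digits users will tolerate entering and comparing in the normal transaction flow.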

Our paper setting out the results was accepted to the Symposium on Usable Privacy and Security (SOUPS), the leading security usability event. This has now started and the paper’s online; the lead author, Khaled Baqer, will be presenting it tomorrow. As we noted last year, the DigiTally pilot was a success. For the data and the detailed analysis, please see our paper:

DigiTally: Piloting Offline Payments for Phones, Khaled Baqer, Ross Anderson, Jeunese Adrienne Payne, Lorna Mutegi, Joseph Sevilla, 13th Symposium on Usable Privacy & Security (SOUPS 2017), pp 131–143

July 10 2017


National Audit Office confirms that police, banks, Home Office pass the buck on fraud

The National Audit Office has found as follows:

“For too long, as a low value but high volume crime, online fraud has been overlooked by government, law enforcement and industry. It is now the most commonly experienced crime in England and Wales and demands an urgent response. While the Department is not solely responsible for reducing and preventing online fraud, it is the only body that can oversee the system and lead change. The launch of the Joint Fraud Taskforce in February 2016 was a positive step, but there is still much work to be done. At this stage it is hard to judge that the response to online fraud is proportionate, efficient or effective.”

Our regular readers will recall that over ten years ago the government got the banks to agree with the police that fraud would be reported to the bank first. This ensured that the police and the government could boast of falling fraud figures, while the banks could direct such fraud investigations as did happen. This was roundly criticized by the Science and Technology Committee (here and here) but the government held firm. Over the succeeding decade, dissident criminologists started pointing out that fraud was not falling, just going online like everything else, and the online stuff was being ignored. Successive governments just didn’t want to know; for most of the period in question the Home Secretary was one Theresa May, who so impressed her party by “cutting crime” even though she’d cut 20,000 police jobs that she got a promotion.

But pigeons come home to roost eventually, and over the last two years the Office for National Statistics has been moving to more honest crime figures. The NAO report bears close study by anyone interested in cybercrime, in crime generally, and in how politicians game the crime figures. It makes clear that the Home Office doesn’t know what’s going on (or doesn’t really want to) and hopes that other people (such as banks and the IT industry) will solve the problem.

Government has made one or two token gestures such as setting up Action Fraud, and the NAO piously hopes that the latest such (the Joint Fraud Taskforce) could be beefed up to do some good.

I’m afraid that the NAO’s recommendations are less impressive. Let me give an example. The main online fraud bothering Cambridge University relates to bogus accommodation; about fifty times a year, a new employee or research student turns up to find that the apartment they rented doesn’t exist. This is an organised scam, run by crooks in Germany, that affects students elsewhere in the UK (mostly in London) and is netting £5-10m a year. The cybercrime guy in the Cambridgeshire Constabulary can’t do anything about this as only the National Crime Agency in London is allowed to talk to the German police; but he can’t talk to the NCA directly. He has to go through the Regional Organised Crime Unit in Bedford, who don’t care. The NCA would rather do sexier stuff; they seem to have planned to take over the Serious Fraud Office, as that was in the Conservative manifesto for this year’s election.

Every time we look at why some scam persists, it’s down to the institutional economics – to the way that government and the police forces have arranged their targets, their responsibilities and their reporting lines so as to make problems into somebody else’s problems. The same applies in the private sector; if you complain about fraud on your bank account the bank may simply reply that as their systems are secure, it’s your fault. If they record it at all, it may be as a fraud you attempted to commit against them. And it’s remarkable how high a proportion of people prosecuted under the Computer Misuse Act appear to have annoyed authority, for example by hacking police websites. Why do we civilians not get protected with this level of enthusiasm?

Many people have lobbied for change; LBT readers will recall numerous articles over the last ten years. Which? made a supercomplaint to the Payment Services Regulator, and got the usual bland non-reassurance. Other members of the old establishment were less courteous; the Commissioner of the Met said that fraud was the victims’ fault and GCHQ agreed. Such attitudes hit the poor and minorities the hardest.

The NAO is just as reluctant to engage. At p34 it says of the Home Office “The Department … has to influence partners to take responsibility in the absence of more formal legal or contractual levers.” But we already have the Payment Services Regulations; the FCA explained in response to the Tesco Bank hack that the banks it regulates should make fraud victims good. And it has always been the common-law position that in the absence of gross negligence a banker could not debit his customer’s account without the customer’s mandate. What’s lacking is enforcement. Nobody, from the Home Office through the FCA to the NAO, seems to want to face down the banks. Rather than insisting that they obey the law, the Home Office will spend another £500,000 on a publicity campaign, no doubt to tell us that it’s all our fault really.

June 26 2017


WEIS 2017 – liveblog

I’m at the sixteenth workshop on the economics of information security at UCSD. I’ll be liveblogging the sessions in followups to this post.

June 16 2017


Regulatory capture

Today’s newspapers report that the cladding on the Grenfell Tower, which appears to have been a major factor in the dreadful loss of life there, was banned in Germany and permitted in America only for low-rise buildings. It would have cost only £2 more per square meter to use fire-resistant cladding instead.

The tactical way of looking at this is whether the landlords or the builders were negligent, or even guilty of manslaughter, for taking such a risk in order to save £5000 on an £8m renovation job. The strategic approach is to ask why British regulators are so easily bullied by the industries they are supposed to police. There is a whole literature on regulatory capture but Britain seems particularly prone to it.

Regular readers of this blog will recall many cases of British regulators providing the appearance of safety, privacy and security rather than the reality. The Information Commissioner is supposed to regulate privacy but backs away from confronting powerful interests such as the tabloid press or the Department of Health. The Financial Ombudsman Service is supposed to protect customers but mostly sides with the banks instead; the new Payment Systems Regulator seems no better. The MHRA is supposed to regulate the safety of medical devices, yet resists doing anything about infusion pumps, which kill as many people as cars do.

Attempts to fix individual regulators are frustrated by lobbyists, or even by fear of lobbyists. For example, my colleague Harold Thimbleby has done great work on documenting the hazards of infusion pumps; yet when he applied to be a non-executive director of the MHRA he was not even shortlisted. I asked a civil servant who was once responsible for recommending such appointments to the Secretary of State why ministers never seemed to appoint people like Harold who might make a real difference. He replied wearily that ministers would never dream of that as “the drug companies would make too much of a fuss”.

In the wake of this tragedy there are both tactical and strategic questions of blame. Tactically, who decided that it was OK to use flammable cladding on high-rise buildings, when other countries came to a different conclusion? Should organisations be fined, should people be fired, and should anyone go to prison? That’s now a matter for the public inquiry, the police and the courts.

Strategically, why are British regulators so cosy with the industries they regulate, and what can be done about that? My starting point is that the appointment of regulators should no longer be in the gift of ministers. I propose that regulatory appointments be moved from the Cabinet Office to an independent commission, like the Judicial Appointments Commission, but with a statutory duty to hire the people most likely to challenge groupthink and keep the regulator effective. That is a political matter – a matter for all of us.

June 14 2017


Camouflage or scary monsters: deceiving others about risk

I have just been at the Cambridge Risk and Uncertainty Conference which brings together people who educate the public about risks. They include public-health doctors trying to get people to eat better and exercise more, statisticians trying to keep governments honest about crime statistics, and climatologists trying to educate us about global warming – an eclectic and interesting bunch.

Most of the people in this community see their role as dispelling ignorance, or motivating the slothful. Yet in most of the cases we discussed, the public get risk wrong because powerful interests make a serious effort to scare them about some of life’s little hazards, or to reassure them about others. When this is put to the risk communication folks in a question – whether after a talk or in the corridor – they readily admit they’re up against a torrent of misleading marketing. But they don’t see what they’re doing as adversarial, and I strongly suspect that many risk interventions are less effective as a result.

In my talk (slides) I set this out as simply and starkly as I could. We spend too much on terrorism, because both the terrorists and the governments who’re supposed to protect us from them big up the threat; we spend too little on cybercrime, because everyone from the crooks through the police and the banks to the computer industry has their own reason to talk down the threat. I mentioned recent cases such as Wannacry as examples of how institutions communicate risk in self-serving, misleading ways. I discussed our own study of browser warnings, which suggests that people at least subconsciously know that most of the warnings they see are written to benefit others rather than them; they tune out all but the most specific.

What struck me with some force when preparing my talk, though, is that there’s just nobody in academia who takes a holistic view of adversarial risk communication. Many people look at some small part of the problem, from David Rios’ game-theoretic analysis of adversarial risk through John Mueller’s studies of terrorism risk and Alessandro Acquisti’s behavioural economics of privacy, through to criminologists who study pathways into crime and psychologists who study deception. Of all these, the literature on deception might be the most relevant, though we should also look at politics, propaganda, and studies of why people stubbornly persist in their beliefs – including the excellent work by Bénabou and Tirole on the value people place on belief. Perhaps the professionals whose job comes closest to adversarial risk communication are political spin doctors. So when should we talk about new facts, and when should we talk about who’s deceiving you and why?

Given the current concern over populism and the role of social media in the Brexit and Trump votes, it might be time for a more careful cross-disciplinary study of how we can change people’s minds about risk in the presence of smart and persistent adversaries. We know, for example, that a college education makes people much less susceptible to propaganda and marketing; but what is the science behind designing interventions that are quicker and cheaper in specific circumstances?


News about the x64 edition

Sorry for the long silence since IDA v6.95; we were all incredibly busy with the transition to the 64-bit version. We are happy to say that we are now close to the finish line and will announce the beta test soon.

The transition to x64 itself was not that hard. We have been compiling IDA in x64 mode for many years, so making it actually work was a piece of cake.

It was much more time-consuming to clean up the API: make it more logical, easier to use, and even remove some obsolete stuff that we had been carrying around for ages. Switching to x64 is a unique opportunity for this cleanup: there are no existing x64 plugins yet, so we won’t be breaking any working plugins. We hope that you will like the new API much more.

Just to give you an idea: we made more than 8000 commits to our source code repository, and the commit descriptions alone amount to more than 1MB. We reached 1.6M lines of source code, not counting third-party or auto-generated files. I haven’t counted how many lines have changed since v6.95, but it may well be that every second line was modified during the transition. In short, it was a huge undertaking.

While it was huge, we did not manage to make everything ideal (is that even possible?). We will continue to work on the API in the future.

Naturally, we were also busy fixing our past bugs (309 bugs since the public release of v6.95, to be exact; fortunately, almost all of them manifest only in very specific circumstances).

We were also working on new features. Just to give you an idea of the new stuff, here are some very short descriptions.

First of all, IDA has fully switched to UTF-8. It will be possible to use Unicode strings everywhere, and to specify a specific encoding for any string in the input file. The databases will be kept in UTF-8 as well, which will allow us to get rid of inconsistencies between the Windows and Unix versions of IDA. In fact it goes deeper than simple UTF-8 support: we have a new system for international character support, but it deserves a separate blog entry. We will talk about it in more detail after the release.
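The general idea can be illustrated with a minimal sketch in plain Python (this is not the IDA API; the function name and the choice of encodings are purely hypothetical examples): a string is decoded using whatever encoding it has in the input file, and the result is held in one common form that serialises to UTF-8 for storage.

```python
# Illustrative sketch (not the IDA API): normalising string literals
# found in a binary to one common UTF-8 form, regardless of their
# on-disk encoding.

def normalise_to_utf8(raw: bytes, encoding: str) -> str:
    """Decode a string literal using its declared encoding; the
    resulting str serialises to UTF-8 when written to the database."""
    return raw.decode(encoding)

# A Shift-JIS string and a UTF-16LE string end up in the same form:
assert normalise_to_utf8("テスト".encode("shift_jis"), "shift_jis") == "テスト"
assert normalise_to_utf8("hello".encode("utf-16-le"), "utf-16-le") == "hello"
```

Keeping everything in one canonical encoding is what removes the Windows/Unix inconsistencies: the database contents no longer depend on the platform’s native code page.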

We will release the PowerPC 64-bit decompiler along with IDA v7. This assembler code (which did not fit into the screenshot):

Gets converted into one line:

We added support for exception handling. IDA recognizes try/except blocks and neatly comments them in the listing:

The iOS debugger improves support for debugging dylibs from dyld_shared_cache and adds support for source code level debugging.

We have tons of other improvements: better GDBServer support, updated FLAIR signatures, improved decompiler heuristics, updated built-in functions for IDAPython and IDC, new switch table patterns, etc.

Stay tuned, we will announce the beta test soon!

June 08 2017


Second Annual Cybercrime Conference

The Cambridge Cybercrime Centre is organising another one day conference on cybercrime on Thursday, 13th July 2017.

In future years we intend to focus on research that has been carried out using datasets provided by the Cybercrime Centre, but just as last year (details here, liveblog here) we have a stellar group of invited speakers who are at the forefront of their fields:

They will present various aspects of cybercrime from the point of view of criminology, policy, security economics, law and policing.

This one day event, to be held in the Faculty of Law, University of Cambridge, will follow immediately after (and will be in the same venue as) the “Tenth International Conference on Evidence Based Policing” organised by the Institute of Criminology, which runs on the 11th and 12th July 2017.

Full details (and information about booking) are here.

June 01 2017


When safety and security become one

What happens when your car starts getting monthly upgrades like your phone and your laptop? It’s starting to happen, and the changes will be profound. We’ll be able to improve car safety as we learn from accidents, and fixing a flaw won’t mean spending billions on a recall. But if you’re writing navigation code today that will go in the 2020 Landrover, how will you be able to ship safety and security patches in 2030? In 2040? In 2050? At present we struggle to keep software patched for three years; we have no idea how to do it for 30.

Our latest paper reports a project that Éireann Leverett, Richard Clayton and I undertook for the European Commission into what happens to safety in this brave new world. Europe is the world’s lead safety regulator for about a dozen industry sectors, of which we studied three: road transport, medical devices and the electricity industry.

Up till now, we’ve known how to make two kinds of fairly secure system. There’s the software in your phone or laptop which is complex and exposed to online attack, so has to be patched regularly as vulnerabilities are discovered. It’s typically abandoned after a few years as patching too many versions of software costs too much. The other kind is the software in safety-critical machinery which has tended to be stable, simple and thoroughly tested, and not exposed to the big bad Internet. As these two worlds collide, there will be some rather large waves.

Regulators who only thought in terms of safety will have to start thinking of security too. Safety engineers will have to learn adversarial thinking. Security engineers will have to think much more about ease of safe use. Educators will have to start teaching these subjects together. (I just expanded my introductory course on software engineering into one on software and security engineering.) And the policy debate will change too; people might vote for the FBI to have a golden master key to unlock your iPhone and read your private messages, but they might be less likely to vote them a master key to take over your car or your pacemaker.

Researchers and software developers will have to think seriously about how we can keep on patching the software in durable goods such as vehicles for thirty or forty years. It’s not acceptable to recycle cars after seven years, as greedy carmakers might hope; the embedded carbon cost of a car is about equal to its lifetime fuel burn, and reducing average mileage from 200,000 to 70,000 would treble the car industry’s CO2 emissions. So we’re going to have to learn how to make software sustainable. How do we do that?
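The arithmetic behind that claim can be checked with a quick back-of-envelope sketch, under the stated assumption that a car’s embedded (manufacturing) carbon roughly equals its lifetime fuel burn; the normalised figures below are illustrative assumptions, not measured data.

```python
# Back-of-envelope check: if embedded CO2 per car is fixed, the car
# industry's manufacturing emissions scale with the number of cars
# needed to deliver a fixed total demand for miles.

def manufacturing_emissions(total_miles: float, miles_per_car: float,
                            embedded_per_car: float) -> float:
    """CO2 embedded in building enough cars to cover total_miles."""
    cars_needed = total_miles / miles_per_car
    return cars_needed * embedded_per_car

EMBEDDED = 1.0           # normalised units of CO2 per car built
FLEET_MILES = 200_000    # any fixed total mileage demand will do

long_lived = manufacturing_emissions(FLEET_MILES, 200_000, EMBEDDED)
short_lived = manufacturing_emissions(FLEET_MILES, 70_000, EMBEDDED)
print(short_lived / long_lived)  # 200000/70000 ≈ 2.86, roughly treble
```

Cutting lifetime mileage from 200,000 to 70,000 means building almost three times as many cars for the same transport demand, which is where the “treble the car industry’s CO2 emissions” figure comes from.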

Our paper is here; there’s a short video here and a longer video here.

May 25 2017


Security and Human Behaviour 2017

I’m liveblogging the Workshop on Security and Human Behaviour which is being held here in Cambridge. The programme is here. For background, see the liveblogs for SHB 2008-15 which are linked here. Blog posts summarising the talks at the workshop sessions will appear as followups below.

May 23 2017


RIP smart meters

The Telegraph has just run an op-ed they asked me to write over the weekend, after I pointed out here on Friday that the Conservative manifesto had quietly downgraded the smart meter programme to a voluntary one.

Regular readers of Light Blue Touchpaper will have followed the smart meter story for almost a decade, back through the dishonest impact assessment to the fact that they pose a threat to critical infrastructure.

May 19 2017


Manifestos and tech

The papers went to town yesterday on the Conservative manifesto but missed some interesting bits.

First, no-one seems to have noticed that the smart meter programme is being quietly put to death. We read on page 60 that everyone will be offered a smart meter by 2020. So a mandatory national programme has become voluntary, just like that. Regular readers of this blog will recall that the programme was sold in 2008 by Ed Miliband using a dishonest impact assessment, yet all the parties backed it after 2010, leaving no-one to point out that it was going to cost us all a fortune and never save any carbon. May says she wants to reduce energy costs; this was surely a no-brainer.

That was the good news for England. The good news for friends in rural Scotland is high-speed broadband for all by 2020. But there are some rather weird things in there too.

What on earth is “the right of businesses to insist on a digital signature”? Digital signatures are very 1998, and we already have the electronic signature directive. From whom will businesses be able to insist on a signature, and if I’m one of the legislated victims, how much do I have to pay to buy the apparatus?

All digital businesses will have “to support new digital proofs of identification”. That presumably means forcing firms to use Verify, a dysfunctional online authentication service whose roots lie in Blair’s obsession with identity. If a newspaper currently identifies its subscribers via a proprietary logon, will they have to offer Verify as an option? Will it have to be the only option, displacing Facebook and Twitter? The manifesto also says that local government will have to use Verify; and elsewhere that councils must publish planning applications and bus routes “without the hassle and delay that currently exists.” OK, so some councils could do with more competent webmasters, but don’t worry: “hundreds of leaders from the world of tech can come into government to help deliver better public services.”

The Land Registry, the Ordnance Survey and other quangos that do geography (our leader’s degree subject) will all band together to create the largest open repository of land data in the world. So where will the Ordnance Survey get its money from then? That small question killed the same idea in 2010 after Tim Berners-Lee sold it to Cameron.

There will be a levy on social media companies, like on gambling companies, to support awareness and preventive activity. And they must not direct users, even unintentionally, to hate speech. So will Facebook be fined whenever they let users like a xenophobic article in the Daily Mail?

No doubt in view of the delicacy of such regulatory decisions, Leveson II is killed; there will be a Data Use and Ethics Commission instead. It will advise regulators and develop the principles and rules that will give people confidence their data are being handled properly. Wow. We now have the Charter of Fundamental Rights to give us principles, the GDPR to give us rules, and the ECJ to hammer out the case law. Now that the People don’t have confidence in such experts, we’re going to let the Prime Minister of the day appoint a different lot.

The next government will further strengthen cyber security standards for government and public services, so presumably all such services will have to use expensive networks such as the NHS-wide network from BT which will expect them to manage their own firewalls without telling them how to.

But don’t worry. It will become “as difficult to commit a crime digitally as it is physically”. There is text about working “with international law enforcement agencies to ensure perpetrators are brought to justice” but our local police force isn’t allowed to do anything effective about online accommodation fraud committed by a gang in Germany. They have to work through the NCA – who don’t care. The manifesto signals more of the same: the NCA will get to eat the SFO, which does crimes over £100m, leaving them even less interested in online crooks who steal a thousand pounds of deposit from dozens of students a year.

In fact there is no signal anywhere in the manifesto that May understands the impact of volume cybercrime, even though it’s now most of the property crime in the UK. She rather prefers to boast of the falling crime over the past seven years, as if it were her achievement as Home Secretary. The simple fact is that crime has been going online like everything else, and until 2015 the online part of it wasn’t recorded properly. This was not the doing of Theresa May, but of Margaret Hodge.

The manifesto rather seems to have been drafted in a geek-free room. And let’s not spoil the party by mentioning the impact that tight immigration targets will have on the IT industry, or for that matter on higher education. Perhaps they want us to hope that they don’t really mean that part of it, but perhaps we’d better make a plan to open a campus in India or Canada, just in case.

May 16 2017


A Personal Note and a Sincere Thank You

It’s been a while since we informed you about Felix’s state of health, and an update is long overdue. Thank you all for your patience! Since the day of the last post, there has been an overwhelming number of messages with wishes for his well-being and speedy recovery; some of them long and with very personal notes, some short but in no way less important or appreciated. Most of them have been forwarded to Felix, who has taken, and still takes, a lot of strength from your messages. However, shortly after the blog post, Felix very unfortunately experienced some setbacks in his recovery process; incidents which were deeply concerning, but which, thankfully, could be treated medically with a positive outcome. So, finally, we are very much relieved to be able to say that his condition has been improving constantly since then, at an ever accelerating rate and with no further complications. Felix is already enjoying little tours and walks in the woods surrounding his rehabilitation home, and does not miss any opportunity to engage in ever-longer conversations and discussions, proving that he has not lost the sense of humour and logical thinking that most of you know and love, and which so much defines his character. Felix has personally asked us to include the following message in this post, which we quote unaltered …
Dear friends,

I would like to thank all friends and colleagues, all who have sent
well-wishes to me. Your wishes have been well received and contribute to
the strength needed for my recovery process, providing great support for
my daily routine in rehab. I’m looking at the print-outs every day and
greatly appreciate them.

Thank you very much!

With no less gratitude, his family also wants to express their profound and heartfelt thanks for all the sympathy and solidarity received. Most of the people here at Recurity Labs have visited fairly recently, and we will now step out of the way so that other friends can start visiting him. Please be aware that visiting opportunities are still very limited, since his therapists want to fully exploit the momentum his recovery now has; their judgement is fully trusted in controlling and planning his schedule. We very much hope that this process will continue and accelerate even further. Without a doubt, you have contributed greatly to the recovery process, so thank you all again for being such well-wishing friends!