
November 07 2017

19:04

Ethical issues in research using datasets of illicit origin

On Friday at IMC I presented our paper “Ethical issues in research using datasets of illicit origin” by Daniel R. Thomas, Sergio Pastrana, Alice Hutchings, Richard Clayton, and Alastair R. Beresford. We conducted this research after thinking about some of these issues in the context of our previous work on UDP reflection DDoS attacks.

Data of illicit origin is data obtained by illicit means, such as exploiting a vulnerability or an unauthorized disclosure; in our previous work this was leaked databases from booter services. We analysed existing guidance on ethics, as well as papers that used data of illicit origin, to see what issues researchers are encouraged to discuss and what issues they did discuss. We find wide variation in current practice. We encourage researchers using data of illicit origin to include an ethics section in their paper, explaining why the work was ethical, so that the research community can learn from it. At present, in many cases, the positive benefits as well as the potential harms of research remain entirely unidentified. Few papers record explicit Research Ethics Board (REB) (aka IRB/Ethics Committee) approval for the activity described, and the justifications given for exemption from REB approval suggest deficiencies in the REB process. It is also important to focus on the “human participants” of research rather than the narrower “human subjects” definition, as not all the humans that might be harmed by research are its direct subjects.

The paper and the slides are available.

November 01 2017

09:48

Internet Measurement Conference

I’m at IMC 2017 at Queen Mary University of London, and will try to liveblog a number of the sessions that are relevant to security in followups to this post.

October 30 2017

09:01

IDA and common Python issues

With IDA 7.0 switching fully to the native x64 architecture, we also switched to x64 Python, which brought some new issues but also exposed some we’ve seen before. This post summarizes the most common issues our users encounter, with suggestions on how to fix them, or at least how to diagnose what went wrong before you have to contact support.

Common errors

  1. The specified module could not be found

    You may see such messages in IDA’s Output window:

    LoadLibrary(C:\Program Files\IDA\plugins\python.dll) error: The specified module could not be found.
    C:\Program Files\IDA\plugins\python.dll: can't load file

    …while the file is obviously there.

    This message (it comes from the OS) is a little misleading: it actually means that one of the modules that python.dll (the IDAPython plugin) links to could not be found.
    The most common one is python27.dll (the Python runtime), which is usually installed in %windir%\system32 but may be in a different place for whatever reason (e.g. you’re using an alternative Python distribution, or a user-specific Python installation). You may need to check your PATH environment variable, or possibly reinstall Python (the x64 version).

  2. %1 is not a valid Win32 application

    Typical output:
    LoadLibrary(C:\Program Files\IDA\plugins\python.dll) error: %1 is not a valid Win32 application.
    C:\Program Files\IDA\plugins\python.dll: can't load file

    or:

    ImportError: DLL load failed: %1 is not a valid Win32 application

    This error can happen if IDA tries to load a DLL of the wrong architecture (e.g. an x86 DLL into x64 IDA, or vice versa); a quick way to check a DLL’s architecture is sketched after this list. Common causes:

    • the wrong variant of python27.dll is present in PATH
    • the wrong versions of native modules (.pyd) are being used (e.g. you have the PYTHONPATH or PYTHONHOME environment variable set, pointing to the x86 install)
    • x86 and x64 Python were installed into the same directory (this will never work).
  3. IDAPython: importing “site” failed

    This issue is usually caused by the presence of a non-standard python27.dll in the PATH, which uses its own set of modules (you should edit PATH in this case). However, it may also happen if your Python installation is broken in some way; reinstalling Python manually may fix it.
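
Since the error in item 2 is almost always an architecture mismatch, it helps to check the bitness of the DLLs involved directly. Below is a minimal sketch (our illustration, not something shipped with IDA) that reads a PE header and reports whether a file is x86 or x64; the paths at the bottom are examples to replace with your own:

    import struct

    def pe_machine(path):
        """Return the PE machine type of a DLL: 'x86' (0x14c) or 'x64' (0x8664)."""
        with open(path, "rb") as f:
            dos = f.read(0x40)
            # e_lfanew at offset 0x3c points to the "PE\0\0" signature.
            pe_offset = struct.unpack_from("<I", dos, 0x3C)[0]
            f.seek(pe_offset)
            sig, machine = struct.unpack("<4sH", f.read(6))
        if sig != b"PE\x00\x00":
            raise ValueError("%s is not a PE file" % path)
        return {0x14C: "x86", 0x8664: "x64"}.get(machine, hex(machine))

    # Compare the python27.dll the loader would find against IDA's IDAPython
    # plugin; for x64 IDA 7.0 both should report "x64" (paths are examples).
    print(pe_machine(r"C:\Windows\System32\python27.dll"))
    print(pe_machine(r"C:\Program Files\IDA\plugins\python.dll"))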

Diagnosis checklist

  1. Check that you have one and only one python27.dll in PATH. You can check this by executing “where python27.dll” in a command prompt. Expected output:

    c:\>where python27.dll
    C:\Windows\System32\python27.dll

  2. Check where Python is looking for its modules, via this registry key:

    HKEY_LOCAL_MACHINE\SOFTWARE\Python\PythonCore\2.7\InstallPath

    The expected value is C:\Python27-x64\

  3. Check your environment (type “set” in a command prompt) for any variables starting with PYTHON, especially PYTHONHOME and PYTHONPATH, and fix or remove them. If you need these variables for other software, we suggest making a .bat file that clears them and then starts IDA.
  4. If IDAPython loads but you get import errors when running scripts, dump sys.path and check it for any unexpected or wrong entries.
    Example good output:

    Python>import sys
    Python>sys.path
    ['C:\\Windows\\system32\\python27.zip', 'C:\\Python27-x64\\Lib', 'C:\\Python27-x64\\DLLs', 'C:\\Python27-x64\\Lib\\lib-tk', 'C:\\Program Files\\IDA 7.0\\python', 'C:\\Python27-x64', 'C:\\Python27-x64\\lib\\site-packages', 'C:\\Program Files\\IDA 7.0\\python\\lib\\python2.7\\lib-dynload\\ida_32', 'C:\\Program Files\\IDA 7.0\\python']

  5. Trace the paths from which IDAPython loads modules. You can do this by setting the environment variable PYTHONVERBOSE=1 before running IDA. The paths will be printed to the Output window (you can also save them to a file by adding -L<logfile> to IDA’s command line).
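
Several of the checklist steps can be bundled into one quick script, run from IDA’s Python prompt or a standalone interpreter. This is a rough sketch, assuming the standard x64 setup described above (Python 2.7 and the registry key from step 2):

    import os
    import sys
    import ctypes.util

    # Step 1: which python27.dll does a PATH search find first?
    print("python27.dll resolves to: %s" % ctypes.util.find_library("python27"))

    # Step 2: where does the registry say Python 2.7 is installed?
    try:
        import _winreg as winreg  # the module is named just "winreg" on Python 3
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE,
                            r"SOFTWARE\Python\PythonCore\2.7\InstallPath") as key:
            print("InstallPath: %s" % winreg.QueryValue(key, None))
    except OSError as e:
        print("InstallPath not found: %s" % e)

    # Step 3: any PYTHON* environment variables that could redirect imports?
    for name in sorted(os.environ):
        if name.upper().startswith("PYTHON"):
            print("%s=%s" % (name, os.environ[name]))

    # Step 4: where will modules actually be imported from?
    for entry in sys.path:
        print(entry)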

Reinstalling Python/IDA

In case you decide to reinstall Python and/or IDA, do not just delete their directories, as this may leave remnants of the installation in the registry or elsewhere and mess up future installs. Use the corresponding uninstallers. Note that IDA’s uninstaller does not uninstall Python, so that needs to be done separately if required.
Normally the IDA 7.0 installer installs x64 Python if it’s not already installed, but you can also download a 2.7 installer from python.org (pick the “Windows x86-64 MSI installer”). You can install it into any location of your choice as long as IDA can find it (python27.dll should be in PATH); however, we recommend using C:\Python27-x64 (with the “All Users” option) so it does not conflict with the 32-bit install.
Before uninstalling IDA, check that you still have the original installer. If necessary, you can request a new download (only possible with active support).

October 13 2017

14:23

Security economics MOOC running once more

Colleagues and I created a massive open online course (MOOC) in the economics of information security, which ran in 2015 and again in 2016.

I’m pleased to announce that it’s now running again until December 30th as a self-paced course. Registration is open here.

September 19 2017

16:54

IDA 7.0: Qt 5.6.0 configure options & patch

A handful of our users have already requested information regarding the Qt 5.6.0 build that is shipped with IDA 7.0.

Configure options

Here are the options that were used to build the libraries on each platform:

  • Windows: ...\5.6.0\configure.bat "-nomake" "tests" "-qtnamespace" "QT" "-confirm-license" "-accessibility" "-opensource" "-force-debug-info" "-platform" "win32-msvc2015" "-opengl" "desktop" "-prefix" "C:/Qt/5.6.0-x64"
    • Note that you will have to build with Visual Studio 2015 to obtain compatible libs
  • Linux: .../5.6.0/configure "-nomake" "tests" "-qtnamespace" "QT" "-confirm-license" "-accessibility" "-opensource" "-force-debug-info" "-platform" "linux-g++-64" "-developer-build" "-fontconfig" "-qt-freetype" "-qt-libpng" "-glib" "-qt-xcb" "-dbus" "-qt-sql-sqlite" "-gtkstyle" "-prefix" "/usr/local/Qt/5.6.0-x64"
  • Mac OSX: .../5.6.0/configure "-nomake" "tests" "-qtnamespace" "QT" "-confirm-license" "-accessibility" "-opensource" "-force-debug-info" "-platform" "macx-g++" "-debug-and-release" "-fontconfig" "-qt-freetype" "-qt-libpng" "-qt-sql-sqlite" "-prefix" "/Users/Shared/Qt/5.6.0-x64"

Patch

In addition to the specific configure options, the Qt build that ships with IDA includes the following patch. You should therefore apply it to your own Qt 5.6.0 sources before compiling, in order to obtain similar binaries.

Note that this patch should work without any modification against the 5.6.0 release as found there. You may have to fiddle with it if your Qt 5.6.0 sources come from somewhere else.

September 10 2017

18:09

Is this research ethical?

The Economist features face recognition on its front page, reporting that deep neural networks can now tell whether you’re straight or gay better than humans can just by looking at your face. The research they cite is a preprint, available here.

Its authors Kosinski and Wang downloaded thousands of photos from a dating site, ran them through a standard feature-extraction program, then classified gay vs straight using a standard statistical classifier, which they found could tell the men seeking men from the men seeking women. My students pretty well instantly called this out as selection bias; if gay men consider boyish faces to be cuter, then they will upload their most boyish photo. The paper authors suggest their finding may support a theory that sexuality is influenced by fetal testosterone levels, but when you don’t control for such biases your results may say more about social norms than about phenotypes.

Quite apart from the scientific value of the research, which is perhaps best assessed by specialists, I’m concerned with the ethics and privacy aspects. I am surprised that the paper doesn’t report having been through ethical review; the authors consider that photos on a dating website are public information and appear to assume that privacy issues simply do not arise.

Yet UK courts decided, in Campbell v Mirror, that privacy could be violated even by photos taken on the public street, and European courts have come to similar conclusions in I v Finland and elsewhere. For example, a Catholic woman is entitled to object to the use of her medical record in research on abortifacients and contraceptives even if the proposed use is fully anonymised and presents no privacy risk whatsoever. The dating site users would be similarly entitled to object to their photos being used in research to which they might have an ethical objection, even if they could not be identified from their photos. There are surely going to be people who object to research in any nature vs nurture debate, especially on a charged topic such as sexuality. And the whole point of the Economist’s coverage is that face-recognition technology is now good enough to work at population scale.

What do LBT readers think?

August 26 2017

09:26

Is the City force corrupt, or just clueless?

This week brought an announcement from a banking association that “identity fraud” is soaring to new levels, with 89,000 cases reported in the first six months of 2017 and 56% of all fraud reported by its members now classed as “identity fraud”.

So what is “identity fraud”? The announcement helpfully clarifies the concept:

“The vast majority of identity fraud happens when a fraudster pretends to be an innocent individual to buy a product or take out a loan in their name. Often victims do not even realise that they have been targeted until a bill arrives for something they did not buy or they experience problems with their credit rating. To carry out this kind of fraud successfully, fraudsters need access to their victim’s personal information such as name, date of birth, address, their bank and who they hold accounts with. Fraudsters get hold of this in a variety of ways, from stealing mail through to hacking; obtaining data on the ‘dark web’; exploiting personal information on social media, or through ‘social engineering’ where innocent parties are persuaded to give up personal information to someone pretending to be from their bank, the police or a trusted retailer.”

Now back when I worked in banking, if someone went to Barclays, pretended to be me, borrowed £10,000 and legged it, that was “impersonation”, and it was the bank’s money that had been stolen, not my identity. How did things change?

The members of this association are banks and credit card issuers. In their narrative, those impersonated are treated as targets, when the targets are actually the banks on whom the impersonation is practised. This is a precursor to refusing bank customers a “remedy” for “their loss” because “they failed to protect themselves.”

Now “dishonestly making a false representation” is an offence under s2 of the Fraud Act 2006. Yet what is the police response?

The Head of the City of London Police’s Economic Crime Directorate does not see the banks’ narrative as dishonest. Instead he goes along with it: “It has become normal for people to publish personal details about themselves on social media and on other online platforms which makes it easier than ever for a fraudster to steal someone’s identity.” He continues: “Be careful who you give your information to, always consider whether it is necessary to part with those details.” This is reinforced with a link to a police website with supposedly scary statistics: 55% of people use open public wifi and 40% of people don’t have antivirus software (like many security researchers, I’m guilty on both counts). This police website has a quote from the Head’s own boss, a Commander who is the National Police Coordinator for Economic Crime.

How are we to rate their conduct? Given that the costs of the City force’s Dedicated Card and Payment Crime Unit are borne by the banks, perhaps they feel obliged to sing from the banks’ hymn sheet. Just as the Macpherson report criticised the Met for being institutionally racist, we might perhaps describe the City force as institutionally corrupt. There is a wide literature on regulatory capture, and many other examples of regulators keen to do the banks’ bidding. And it’s not just the City force. There are disgraceful examples of the Metropolitan Police Commissioner and GCHQ endorsing the banks’ false narrative. However, people are starting to notice, including the National Audit Office.

Or perhaps the police are just clueless?

August 22 2017

17:26

History of the Crypto Wars in Britain

Back in March I gave an invited talk to the Cambridge University Ethics in Mathematics Society on the Crypto Wars. They have just put the video online here.

We spent much of the 1990s pushing back against attempts by the intelligence agencies to seize control of cryptography. From the Clipper Chip through the regulation of trusted third parties to export control, the agencies tried one trick after another to make us all less secure online, claiming that thanks to cryptography the world of intelligence was “going dark”. Quite the opposite was true; with communications moving online, with people starting to carry mobile phones everywhere, and with our communications and traffic data mostly handled by big firms who respond to warrants, law enforcement has never had it so good. Twenty years ago it cost over a thousand pounds a day to follow a suspect around, and weeks of work to map his contacts; Ed Snowden told us how nowadays an officer can get your location history with one click and your address book with another. In fact, searches through the contact patterns of whole populations are now routine.

The checks and balances that we thought had been built into the RIP Act in 2000, after all our lobbying during the 1990s, turned out to be ineffective. GCHQ simply broke the law and, after Snowden exposed them, Parliament passed the IP Act to declare that what they did was all right now. The Act allows the Home Secretary to give secret orders to tech companies to do anything they physically can to facilitate surveillance, thereby delighting our foreign competitors. And Brexit means the government thinks it can ignore the European Court of Justice, which has already ruled against some of the Act’s provisions. (Or perhaps Theresa May chose a hard Brexit because she doesn’t want the pesky court in the way.)

Yet we now see the Home Secretary repeating the old nonsense about decent people not needing privacy along with law enforcement officials on both sides of the Atlantic. Why doesn’t she just sign the technical capability notices she deems necessary and serve them?

In these fraught times it might be useful to recall how we got here. My talk to the Ethics in Mathematics Society was a personal memoir; there are many links on my web page to relevant documents.

09:13

A quick post on Wikipedia-scrubbing and a historical document on binary diffing

I am a huge fan of Wikipedia: I sometimes browse it the way other people watch TV, skipping from topic to topic and, on average, being impressed by the quality of the articles.

One thing I have noticed in recent years, though, is that the base-democratic principles of Wikipedia open it up to manipulation and whitewashing: Wikipedia’s guidelines are strict, and a person can get a lot of negative information removed just by cleverly using the guidelines to challenge entries. This is no fault of Wikipedia (in fact, I think the guidelines are good and useful), but it is often instructive to read the history of a particular page.

I recently stumbled over a particularly amusing example of this, and feel compelled to write about it.

More than twelve years ago, when BinDiff was brand-new and wingraph32.exe was still the graph visualization tool of choice, there was a controversy surrounding a product called “CherryOS”, which purported to be an Apple emulator. A student had raised the allegation on his website that “CherryOS” had misappropriated source code from an open-source project called “PearPC”, and the founder of the company selling CherryOS (somebody by the name of Arben Kryeziu) had threatened the student legally over this claim.

In order to help a good cause, we did a quick analysis of the code similarities between CherryOS and PearPC, and found that approximately half of the code in CherryOS was a verbatim copy & paste from PearPC. We wrote a small report, provided it to the lawyer of the student who was under legal threat, and the entire kerfuffle died down quickly. Wikipedia used to have a page that detailed some of the drama for a few years thereafter.

I recently stumbled over the Wikipedia page of CherryOS, and was impressed: The page had been cleaned of any information that supported the code-theft claims, and offered a narrative where there had never been conclusive consensus that CherryOS was full of misappropriated code. This is not a reflection of what happened back then at all.

Anyhow, in a twist of fate, I also found an old USB stick which still contained a draft of the 2005 note we wrote. For the sake of history, here it is :-)

I had forgotten how painful it was to look at disassembly CFGs in wingraph32. Sometimes, when I am frustrated at the speed at which RE tools have improved during my professional life, it is useful to be reminded what the dark ages looked like.


August 14 2017

09:30

Compartmentation is hard, but the Big Data playbook makes it harder still

A new study of Palantir’s systems and business methods makes sobering reading for people interested in what big data means for privacy.

Privacy scales badly. It’s OK for the twenty staff at a medical practice to have access to the records of the ten thousand patients registered there, but when you build a centralised system that lets every doctor and nurse in the country see every patient’s record, things go wrong. There are even sharper concerns in the world of intelligence, which agencies try to manage using compartmentation: really sensitive information is often put in a compartment that’s restricted to a handful of staff. But such systems are hard to build and maintain. Readers of my book chapter on the subject will recall that while US Naval Intelligence struggled to manage millions of compartments, the CIA let more of their staff see more stuff – whereupon Aldrich Ames betrayed their agents to the Russians.

After 9/11, the intelligence community moved towards the CIA model, in the hope that with fewer compartments they’d be better able to prevent future attacks. We predicted trouble, and Snowden duly came along. As for civilian agencies such as Britain’s NHS and police, no serious effort was made to protect personal privacy by compartmentation, with multiple consequences.

Palantir’s systems were developed to help the intelligence community link, fuse and visualise data from multiple sources, and are now sold to police forces too. It should surprise no-one to learn that they do not compartment information properly, whether within a single force or even between forces. The organised crime squad’s secret informants can thus become visible to traffic cops, and even to cops in other forces, with tragically predictable consequences. Fixing this is hard, as Palantir’s market advantage comes from network effects and the resulting scale. The more police forces they sign up the more data they have, and the larger they grow the more third-party databases they integrate, leaving private-sector competitors even further behind.

This much we could have predicted from first principles but the details of how Palantir operates, and what police forces dislike about it, are worth studying.

What might be the appropriate public-policy response? Well, the best analysis of competition policy in the presence of network effects is probably Lina Khan’s, and her analysis would suggest in this case that police intelligence should be a regulated utility. We should develop those capabilities that are actually needed, and the right place for them is the Police National Database. The public sector is better placed to commit the engineering effort to do compartmentation properly, both there and in other applications where it’s needed, such as the NHS. Good engineering is expensive – but as the Los Angeles Police Department found, engaging Palantir can be more expensive still.

August 02 2017

22:54

Cambridge2Cambridge 2017

Following on from various other similar events we organised over the past few years, last week we hosted our largest ethical hacking competition yet, Cambridge2Cambridge 2017, with over 100 students from some of the best universities in the US and UK working together over three days. Cambridge2Cambridge was founded jointly by MIT CSAIL (in Cambridge Massachusetts) and the University of Cambridge Computer Laboratory (in the original Cambridge) and was first run at MIT in 2016 as a competition involving only students from these two universities. This year it was hosted in Cambridge UK and we broadened the participation to many more universities in the two countries. We hope in the future to broaden participation to more countries as well.

Cambridge 2 Cambridge 2017 from Frank Stajano Explains on Vimeo.

We assigned the competitors to teams that were mixed in terms of both provenance and experience. Each team had competitors from the US and the UK, and no two people from the same university; each team also mixed experienced and less experienced players, based on the qualifier scores. We did so to ensure that even those who only started learning about ethical hacking when they heard about this competition would have an equal chance of being on the team that wins the gold. We mixed provenance to ensure that, during these three days, students collaborated with people they didn’t already know.

Despite their different backgrounds, what the attendees had in common was that they were all pretty smart and had an interest in cyber security. It’s a safe bet that, ten or twenty years from now, a number of them will be Security Specialists, Licensed Ethical Hackers, Chief Security Officers, National Security Advisors or other high-calibre security professionals. When their institution or country is under attack, they will be able to get in touch with the other smart people they met here in Cambridge in 2017, and they’ll be in a position to help each other. That’s why the defining feature of the event was collaboration, making new friends and having fun together. Unlike your standard one-day hacking contest, the ambitious three-day programme of C2C 2017 allowed for social activities including punting on the river Cam, pub crawling and a Harry Potter style gala dinner in Trinity College.

In between competition sessions we had a lively and inspirational “women in cyber” panel, another panel on “securing the future digital society”, one on “real world pentesting” and a careers advice session. On the second day we hosted several groups of bright teenagers who had been finalists in the national CyberFirst Girls Competition. We hope to inspire many more women to take up a career path that has so far been very male-dominated. More broadly, we wish to inspire many young kids, girls or boys, to engage in the thrilling challenge of unravelling how computers work (and how they fail to work) in a high-stakes mental chess game of adversarial attack and defense.

Our platinum sponsors Leidos and NCC Group endowed the competition with over £20,000 of cash prizes, awarded to the best 3 teams and the best 3 individuals. Besides the main attack-defense CTF, fought on the Leidos CyberNEXS cyber range, our other sponsors offered additional competitions, the results of which were combined to generate the overall teams and individual scores. Here is the leaderboard, showing how our contestants performed. Special congratulations to Bo Robert Xiao of Carnegie Mellon University who, besides winning first place in both team and individuals, also went on to win at DEF CON in team PPP a couple of days later.

We are grateful to our supporters, our sponsors, our panelists, our guests, our staff and, above all, our 110 competitors for making this event a success. It was particularly pleasing to see several students who had already taken part in some of our previous competitions (special mention for Luke Granger-Brown from Imperial, who earned medals at every visit). Chase Lucas from Dakota State University, having passed the qualifier but not having been picked in the initial random selection, was on the reserve list in case we got funding to fly additional students; he then promptly offered to pay for his own airfare in order to be able to attend! Inter-ACE 2017 winner Io Swift Wolf from Southampton deserted her own graduation ceremony in order to participate in C2C, and then donated precious time during the competition to the CyberFirst girls, who listened to her rapturously. Accumulating all that good karma could not go unrewarded, and indeed you can once again find her name in the leaderboard above. And I’ve only singled out a few of many amazing, dynamic and enthusiastic young people. Watch out for them: they are the ones who will defend the future digital society, including you and your family, from the cyber attacks we keep reading about in the media. We need many more like them, and we need to put them in touch with each other. The bad guys are organised, so we have to be organised too.

The event was covered by Sky News, ITV, BBC World Service and a variety of other media, which the official website and twitter page will undoubtedly collect in due course.

July 21 2017

10:57

AlphaBay and Hansa Market takedowns

Yesterday the FBI announced the takedown of the AlphaBay marketplace, a hidden service facilitating the sale of drugs, as well as other illicit products and services. The takedown had actually occurred weeks earlier, and had been staged to appear like an exit scam, where the operators take off with the money.

What was particularly interesting about the FBI’s takedown was that it was coordinated with the activities of the Dutch police, who had previously taken over the Hansa Market, another leading blackmarket. As the investigators were then controlling this marketplace they were able to monitor the activities of traders who had been using AlphaBay and then moved to Hansa Market.

I’ve been interested in online blackmarkets for some time, particularly those that relate to the stolen data economy. In fact, last year a paper written by Professor Thomas Holt and me was published. This paper outlines a number of intervention approaches, including disrupting the actual marketplaces where trade takes place.

Among our numerous suggestions are three that have been used, in combination, by this international police effort. We suggest that law enforcement promote distrust, which they did by making AlphaBay appear to have been an exit scam. We also suggest that law enforcement take over and take down marketplaces. Neither of these police approaches is new, and we point to previous examples where this has happened. In our conclusion, we stated:

Multiple interventions coordinated across different guardians, nationally and internationally, incorporating different bodies (investigative, regulatory, strategic, non-government organisations and the private sector) that have ownership of the crime prevention problem may reduce duplication of effort, as well as provide a more systematic approach with the greatest disruption effect.

The Hansa Market and AlphaBay approach demonstrates how this can be achieved. By co-ordinating their approaches and working together, the disruptive effects of their work are likely to be much greater than if they had acted alone. It’s likely we’ll see arrests of traders and further disruption to the online drug trade.

Work by Soska and Christin found that after the Silk Road takedown, more online blackmarkets emerged and evolved. I think this evolution will continue, but perhaps marketplace administrators will have to work harder in order to earn the trust of their users.

July 12 2017

17:38

Testing the usability of offline mobile payments

Last September we spent some time in Nairobi figuring out whether we could make offline phone payments usable. Phone payments have greatly improved the lives of millions of poor people in countries like Kenya and Bangladesh, who previously didn’t have bank accounts at all but who can now send and receive money using their phones. That’s great for the 80% who have mobile phone coverage, but what about the others?

Last year I described how we designed and built a prototype system to support offline payments, with the help of a grant from the Bill and Melinda Gates Foundation, and took it to Africa to test it. Offline payments require both the sender and the receiver to enter some extra digits to ensure that the payer and the payee agree on who’s paying whom how much. We worked as hard as we could to minimise the number of digits and to integrate them into the familiar transaction flow. Would this be good enough?
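
To make the digit exchange concrete, here is a toy sketch of the general idea; this is our illustration of a short HMAC-based confirmation code, not the actual DigiTally protocol, and the key material, phone numbers and encoding are all stand-ins:

    import hmac
    import hashlib

    def confirmation_code(key, payer, payee, amount_cents, digits=8):
        """Derive a short code binding payer, payee and amount (toy example)."""
        message = ("%s|%s|%d" % (payer, payee, amount_cents)).encode("utf-8")
        mac = hmac.new(key, message, hashlib.sha256).hexdigest()
        # Truncate the MAC and reduce it to the desired number of digits.
        return str(int(mac[:15], 16) % (10 ** digits)).zfill(digits)

    # Both phones compute the code from the transaction details they believe
    # were agreed; if the digits typed across match, payer and payee agree
    # on who is paying whom how much.
    key = b"example-shared-key"  # hypothetical key material
    print(confirmation_code(key, "+254700000001", "+254700000002", 50000))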

Our paper setting out the results was accepted to the Symposium on Usable Privacy and Security (SOUPS), the leading security usability event. This has now started and the paper’s online; the lead author, Khaled Baqer, will be presenting it tomorrow. As we noted last year, the DigiTally pilot was a success. For the data and the detailed analysis, please see our paper:

DigiTally: Piloting Offline Payments for Phones, Khaled Baqer, Ross Anderson, Jeunese Adrienne Payne, Lorna Mutegi, Joseph Sevilla, 13th Symposium on Usable Privacy & Security (SOUPS 2017), pp 131–143

July 10 2017

16:06

National Audit Office confirms that police, banks, Home Office pass the buck on fraud

The National Audit Office has found as follows:

“For too long, as a low value but high volume crime, online fraud has been overlooked by government, law enforcement and industry. It is now the most commonly experienced crime in England and Wales and demands an urgent response. While the Department is not solely responsible for reducing and preventing online fraud, it is the only body that can oversee the system and lead change. The launch of the Joint Fraud Taskforce in February 2016 was a positive step, but there is still much work to be done. At this stage it is hard to judge that the response to online fraud is proportionate, efficient or effective.”

Our regular readers will recall that over ten years ago the government got the banks to agree with the police that fraud would be reported to the bank first. This ensured that the police and the government could boast of falling fraud figures, while the banks could direct such fraud investigations as did happen. This was roundly criticized by the Science and Technology Committee (here and here) but the government held firm. Over the succeeding decade, dissident criminologists started pointing out that fraud was not falling, just going online like everything else, and the online stuff was being ignored. Successive governments just didn’t want to know; for most of the period in question the Home Secretary was one Theresa May, who so impressed her party by “cutting crime” even though she’d cut 20,000 police jobs that she got a promotion.

But pigeons come home to roost eventually, and over the last two years the Office for National Statistics has been moving to more honest crime figures. The NAO report bears close study by anyone interested in cybercrime, in crime generally, and in how politicians game the crime figures. It makes clear that the Home Office doesn’t know what’s going on (or doesn’t really want to) and hopes that other people (such as banks and the IT industry) will solve the problem.

Government has made one or two token gestures such as setting up Action Fraud, and the NAO piously hopes that the latest such (the Joint Fraud Taskforce) could be beefed up to do some good.

I’m afraid that the NAO’s recommendations are less impressive. Let me give an example. The main online fraud bothering Cambridge University relates to bogus accommodation; about fifty times a year, a new employee or research student turns up to find that the apartment they rented doesn’t exist. This is an organised scam, run by crooks in Germany, that affects students elsewhere in the UK (mostly in London) and is netting £5-10m a year. The cybercrime guy in the Cambridgeshire Constabulary can’t do anything about this as only the National Crime Agency in London is allowed to talk to the German police; but he can’t talk to the NCA directly. He has to go through the Regional Organised Crime Unit in Bedford, who don’t care. The NCA would rather do sexier stuff; they seem to have planned to take over the Serious Fraud Office, as that was in the Conservative manifesto for this year’s election.

Every time we look at why some scam persists, it’s down to the institutional economics – to the way that government and the police forces have arranged their targets, their responsibilities and their reporting lines so as to make problems into somebody else’s problems. The same applies in the private sector; if you complain about fraud on your bank account the bank may simply reply that as their systems are secure, it’s your fault. If they record it at all, it may be as a fraud you attempted to commit against them. And it’s remarkable how high a proportion of people prosecuted under the Computer Misuse Act appear to have annoyed authority, for example by hacking police websites. Why do we civilians not get protected with this level of enthusiasm?

Many people have lobbied for change; LBT readers will recall numerous articles over the last ten years. Which? made a supercomplaint to the Payment Services Regulator, and got the usual bland non-reassurance. Other members of the old establishment were less courteous; the Commissioner of the Met said that fraud was the victims’ fault and GCHQ agreed. Such attitudes hit the poor and minorities the hardest.

The NAO is just as reluctant to engage. At p34 it says of the Home Office “The Department … has to influence partners to take responsibility in the absence of more formal legal or contractual levers.” But we already have the Payment Services Regulations; the FCA explained in response to the Tesco Bank hack that the banks it regulates should make fraud victims good. And it has always been the common-law position that in the absence of gross negligence a banker could not debit his customer’s account without the customer’s mandate. What’s lacking is enforcement. Nobody, from the Home Office through the FCA to the NAO, seems to want to face down the banks. Rather than insisting that they obey the law, the Home Office will spend another £500,000 on a publicity campaign, no doubt to tell us that it’s all our fault really.

June 26 2017

17:55

WEIS 2017 – liveblog

I’m at the sixteenth workshop on the economics of information security at UCSD. I’ll be liveblogging the sessions in followups to this post.

June 16 2017

10:39

Regulatory capture

Today’s newspapers report that the cladding on the Grenfell Tower, which appears to have been a major factor in the dreadful loss of life there, was banned in Germany and permitted in America only for low-rise buildings. It would have cost only £2 more per square metre to use fire-resistant cladding instead.

The tactical way of looking at this is whether the landlords or the builders were negligent, or even guilty of manslaughter, for taking such a risk in order to save £5000 on an £8m renovation job. The strategic approach is to ask why British regulators are so easily bullied by the industries they are supposed to police. There is a whole literature on regulatory capture but Britain seems particularly prone to it.

Regular readers of this blog will recall many cases of British regulators providing the appearance of safety, privacy and security rather than the reality. The Information Commissioner is supposed to regulate privacy but backs away from confronting powerful interests such as the tabloid press or the Department of Health. The Financial Ombudsman Service is supposed to protect customers but mostly sides with the banks instead; the new Payment Systems Regulator seems no better. The MHRA is supposed to regulate the safety of medical devices, yet resists doing anything about infusion pumps, which kill as many people as cars do.

Attempts to fix individual regulators are frustrated by lobbyists, or even by fear of lobbyists. For example, my colleague Harold Thimbleby has done great work on documenting the hazards of infusion pumps; yet when he applied to be a non-executive director of the MHRA he was not even shortlisted. I asked a civil servant who was once responsible for recommending such appointments to the Secretary of State why ministers never seemed to appoint people like Harold who might make a real difference. He replied wearily that ministers would never dream of that as “the drug companies would make too much of a fuss”.

In the wake of this tragedy there are both tactical and strategic questions of blame. Tactically, who decided that it was OK to use flammable cladding on high-rise buildings, when other countries came to a different conclusion? Should organisations be fined, should people be fired, and should anyone go to prison? That’s now a matter for the public inquiry, the police and the courts.

Strategically, why are British regulators so cosy with the industries they regulate, and what can be done about that? My starting point is that the appointment of regulators should no longer be in the gift of ministers. I propose that regulatory appointments be moved from the Cabinet Office to an independent commission, like the Judicial Appointments Commission, but with a statutory duty to hire the people most likely to challenge groupthink and keep the regulator effective. That is a political matter – a matter for all of us.

June 14 2017

15:25

Camouflage or scary monsters: deceiving others about risk

I have just been at the Cambridge Risk and Uncertainty Conference which brings together people who educate the public about risks. They include public-health doctors trying to get people to eat better and exercise more, statisticians trying to keep governments honest about crime statistics, and climatologists trying to educate us about global warming – an eclectic and interesting bunch.

Most of the people in this community see their role as dispelling ignorance, or motivating the slothful. Yet in most of the cases we discussed, the public get risk wrong because powerful interests make a serious effort to scare them about some of life’s little hazards, or to reassure them about others. When this is put to the risk communication folks in a question – whether after a talk or in the corridor – they readily admit they’re up against a torrent of misleading marketing. But they don’t see what they’re doing as adversarial, and I strongly suspect that many risk interventions are less effective as a result.

In my talk (slides) I set this out as simply and starkly as I could. We spend too much on terrorism, because both the terrorists and the governments who’re supposed to protect us from them big up the threat; we spend too little on cybercrime, because everyone from the crooks through the police and the banks to the computer industry has their own reason to talk down the threat. I mentioned recent cases such as Wannacry as examples of how institutions communicate risk in self-serving, misleading ways. I discussed our own study of browser warnings, which suggests that people at least subconsciously know that most of the warnings they see are written to benefit others rather than them; they tune out all but the most specific.

What struck me with some force when preparing my talk, though, is that there’s just nobody in academia who takes a holistic view of adversarial risk communication. Many people look at some small part of the problem, from David Rios’ game-theoretic analysis of adversarial risk through John Mueller’s studies of terrorism risk and Alessandro Acquisti’s behavioural economics of privacy, through to criminologists who study pathways into crime and psychologists who study deception. Of all these, the literature on deception might be the most relevant, though we should also look at politics, propaganda, and studies of why people stubbornly persist in their beliefs – including the excellent work by Bénabou and Tirole on the value people place on belief. Perhaps the professionals whose job comes closest to adversarial risk communication are political spin doctors. So when should we talk about new facts, and when should we talk about who’s deceiving you and why?

Given the current concern over populism and the role of social media in the Brexit and Trump votes, it might be time for a more careful cross-disciplinary study of how we can change people’s minds about risk in the presence of smart and persistent adversaries. We know, for example, that a college education makes people much less susceptible to propaganda and marketing; but what is the science behind designing interventions that are quicker and cheaper in specific circumstances?

15:04

News about the x64 edition

Sorry for the long silence since IDA v6.95; we have all been incredibly busy with the transition to the 64-bit version. We are happy to say that we are now close to the finish line and will announce the beta test soon.

The transition to x64 itself was not that hard. We have been compiling IDA in x64 mode for many years, so making it actually work was a piece of cake.

It was much more time-consuming to clean up the API: to make it more logical, easier to use, and even to remove some obsolete stuff that we have been carrying around for ages. Switching to x64 is a unique opportunity for this cleanup, because there are no existing x64 plugins yet and we won’t be breaking any working plugins. We hope that you will like the new API much more.

Just to give you an idea: we made more than 8000 commits to our source code repository, and the commit descriptions alone amount to more than 1MB. We reached 1.6M lines of source code, without taking into account any third-party or auto-generated files. I haven’t counted how many lines changed since v6.95, but it may well be that every second line was modified during the transition. In short, it was a huge undertaking.

While it was huge, we did not manage to make everything ideal (is that even possible?). We will continue to work on the API in the future.

Naturally, we were also busy fixing our past bugs (309 bugs since the public release of v6.95, to be exact; fortunately, almost all of them reveal themselves only in very specific circumstances).

We were also working on new features. Just to give you an idea of the new stuff, see very short descriptions below.

First of all, IDA has fully switched to UTF-8. It will be possible to use Unicode strings everywhere, and to specify a specific encoding for any string in the input file. The databases will be kept in UTF-8 as well, which will allow us to get rid of inconsistencies between the Windows and Unix versions of IDA. In fact it goes deeper than simple support of UTF-8: we have a new system for international character support, but it deserves a separate blog entry. We will talk about it in more detail after the release.

We will release the PowerPC 64-bit decompiler along with IDA v7: a long stretch of assembler code (too long to fit into a screenshot) gets converted into a single line of pseudocode.

We added support for exception handling: IDA recognizes try/except blocks and neatly comments them in the listing.

The iOS debugger improves support for debugging dylibs from dyld_shared_cache and adds support for source code level debugging.

We have tons of other improvements: better GDBServer support, updated FLAIR signatures, improved decompiler heuristics, updated built-in functions for IDAPython and IDC, new switch table patterns, etc.

Stay tuned, we will announce the beta test soon!

June 08 2017

11:24

Second Annual Cybercrime Conference

The Cambridge Cybercrime Centre is organising another one-day conference on cybercrime on Thursday, 13th July 2017.

In future years we intend to focus on research that has been carried out using datasets provided by the Cybercrime Centre, but just as last year (details here, liveblog here) we have a stellar group of invited speakers who are at the forefront of their fields. They will present various aspects of cybercrime from the point of view of criminology, policy, security economics, law and policing.

This one-day event, to be held in the Faculty of Law, University of Cambridge, will follow immediately after (and will be in the same venue as) the “Tenth International Conference on Evidence Based Policing” organised by the Institute of Criminology, which runs on the 11th and 12th July 2017.

Full details (and information about booking) are here.

June 01 2017

13:20

When safety and security become one

What happens when your car starts getting monthly upgrades like your phone and your laptop? It’s starting to happen, and the changes will be profound. We’ll be able to improve car safety as we learn from accidents, and fixing a flaw won’t mean spending billions on a recall. But if you’re writing navigation code today that will go in the 2020 Landrover, how will you be able to ship safety and security patches in 2030? In 2040? In 2050? At present we struggle to keep software patched for three years; we have no idea how to do it for 30.

Our latest paper reports a project that Éireann Leverett, Richard Clayton and I undertook for the European Commission into what happens to safety in this brave new world. Europe is the world’s lead safety regulator for about a dozen industry sectors, of which we studied three: road transport, medical devices and the electricity industry.

Up till now, we’ve known how to make two kinds of fairly secure system. There’s the software in your phone or laptop which is complex and exposed to online attack, so has to be patched regularly as vulnerabilities are discovered. It’s typically abandoned after a few years as patching too many versions of software costs too much. The other kind is the software in safety-critical machinery which has tended to be stable, simple and thoroughly tested, and not exposed to the big bad Internet. As these two worlds collide, there will be some rather large waves.

Regulators who only thought in terms of safety will have to start thinking of security too. Safety engineers will have to learn adversarial thinking. Security engineers will have to think much more about ease of safe use. Educators will have to start teaching these subjects together. (I just expanded my introductory course on software engineering into one on software and security engineering.) And the policy debate will change too; people might vote for the FBI to have a golden master key to unlock your iPhone and read your private messages, but they might be less likely to vote them a master key to take over your car or your pacemaker.

Researchers and software developers will have to think seriously about how we can keep on patching the software in durable goods such as vehicles for thirty or forty years. It’s not acceptable to recycle cars after seven years, as greedy carmakers might hope; the embedded carbon cost of a car is about equal to its lifetime fuel burn, and reducing average mileage from 200,000 to 70,000 would treble the car industry’s CO2 emissions. So we’re going to have to learn how to make software sustainable. How do we do that?

Our paper is here; there’s a short video here and a longer video here.
