October 18 2019

11:09

UKRI Digital Security by Design: A £190M research programme around Arm’s Morello – an experimental ARMv8-A CPU, SoC, and board with CHERI support

PIs: Robert N. M. Watson (Cambridge), Simon W. Moore (Cambridge), Peter Sewell (Cambridge), and Peter G. Neumann (SRI)

Since 2010, SRI International and the University of Cambridge, supported by DARPA, have been developing CHERI: a capability-system extension to RISC Instruction-Set Architectures (ISAs) supporting fine-grained memory protection and scalable compartmentalization, while retaining incremental deployability within current C and C++ software stacks. This ten-year research project has involved hardware-software-semantic co-design: FPGA prototyping, compiler development, operating-system development, and application adaptation, as well as formal modeling and proof. We have documented CHERI extensively in technical reports and research papers, iterating on the design as we evaluated and improved microarchitectural overheads, performance, software compatibility, and security.

As we know, mainstream computer systems are still chronically insecure. One of the main reasons for this is that conventional hardware architectures and C/C++ language abstractions, dating back to the 1970s, provide only coarse-grained memory protection. Without memory safety, many coding errors turn into exploitable security vulnerabilities. In our ASPLOS 2019 paper on CheriABI (best paper award), we demonstrated that a complete UNIX userspace and application suite could be protected by strong memory safety with minimal source-code disruption and acceptable performance overheads. Scalable software compartmentalization offers a mitigation against future, as-yet-unknown classes of vulnerabilities by enabling greater use of design patterns such as software sandboxing. Our technical report An Introduction to CHERI introduces our approach, including the architecture, microarchitectural contributions, formal models, software protection model, and practical software adaptation. The CHERI ISA v7 specification is the authoritative reference to the architecture, including both the architecture-neutral protection model and its concrete mappings into the 64-bit MIPS and 32/64-bit RISC-V ISAs. Our Rigorous Engineering technical report describes our modelling and mechanised proof of key security properties.
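To make the protection model concrete, here is a deliberately simplified Python sketch of what a bounds- and permission-checked capability does. It is an illustrative toy with invented names, not CHERI's actual 128-bit hardware capability format:

    class CapabilityViolation(Exception):
        """Stands in for the hardware capability fault CHERI would deliver."""

    class Capability:
        """Toy pointer carrying base, length, and permissions, checked on every use."""
        def __init__(self, memory, base, length, perms=("load", "store")):
            self.memory = memory
            self.base = base
            self.length = length
            self.perms = frozenset(perms)

        def _check(self, offset, perm):
            if perm not in self.perms or not 0 <= offset < self.length:
                raise CapabilityViolation(f"{perm} at offset {offset} outside [0, {self.length})")

        def load(self, offset):
            self._check(offset, "load")
            return self.memory[self.base + offset]

        def store(self, offset, value):
            self._check(offset, "store")
            self.memory[self.base + offset] = value

    memory = bytearray(64)
    buf = Capability(memory, base=16, length=8)  # a "pointer" to an 8-byte buffer
    buf.store(0, 0x41)                           # in bounds: permitted
    try:
        buf.store(8, 0x41)                       # one byte past the end
    except CapabilityViolation as fault:
        print(fault)                             # trapped, rather than silently corrupting memory

On CHERI hardware the equivalent checks happen in the processor on every dereference, so an out-of-bounds access faults instead of corrupting adjacent memory.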

Today, we are very excited to be able to talk about another long-running aspect of our DARPA-supported work: a collaboration since 2014 with engineers at Arm to create an experimental adaptation of CHERI to the ARMv8-A architecture. This widely used ISA is the foundation for the vast majority of mobile phones and tablets, including those running iOS and Android. The £187M UKRI programme Digital Security by Design (DSbD) was announced in late September 2019 to explore potential applications of CHERI, comprising a £70M investment by UKRI and a further £117M from industry, including involvement by Arm, Microsoft, and Google. Today, UKRI and Arm announced that the Arm Morello board will become available from 2021: Morello is a prototype 7nm high-end multi-core superscalar ARMv8-A processor (based on Arm’s Neoverse N1), SoC, and board implementing experimental CHERI extensions. As part of this effort, the UK Engineering and Physical Sciences Research Council (EPSRC) has also announced a new £8M programme to fund UK academics to work with Morello. Arm will release their Morello adaptation of our CHERI Clang/LLVM toolchain, and we will release a full adaptation of our open-source CHERI reference software stack to Morello (including our CheriBSD operating system and application suite) as foundations for research and prototyping on Morello. Watch the DSbD workshop videos from Robert Watson (Cambridge), Richard Grisenthwaite (Arm), and Manuel Costa (Microsoft) on CHERI and Morello, which are linked below, for more information.

This is an incredible opportunity to validate the CHERI approach, with accompanying systems software and formal verification, through an industrial-scale, industrial-quality hardware design, and to broaden the research community around CHERI to explore its potential impact. You can read the announcements about Morello here:

Recordings of several talks on CHERI and Morello are now available from the ISCF Digital Security by Design Challenge Collaborators’ Workshop (26 September 2019), including:

  • Robert Watson (Cambridge)’s talk on CHERI, and on our transition collaboration with Arm (video) (slides)
  • Richard Grisenthwaite (Arm)’s talk on the Morello board and CHERI transition (video) (slides)
  • Manuel Costa (Microsoft)’s talk on memory safety and potential opportunities arising with CHERI and Morello (video)

In addition, we are maintaining a CHERI DSbD web page with background information on CHERI, announcements regarding Morello, links to DSbD funding calls, and information regarding software artefacts, formal models, and so on. We will continue to update that page as the programme proceeds.

This has been possible through the contributions of the many members of the CHERI research team over the last ten years, including: Hesham Almatary, Jonathan Anderson, John Baldwin, Hadrien Barrel, Thomas Bauereiss, Ruslan Bukin, David Chisnall, James Clarke, Nirav Dave, Brooks Davis, Lawrence Esswood, Nathaniel W. Filardo, Khilan Gudka, Brett Gutstein, Alexandre Joannou, Robert Kovacsics, Ben Laurie, A. Theo Markettos, J. Edward Maste, Marno van der Maas, Alfredo Mazzinghi, Alan Mujumdar, Prashanth Mundkur, Steven J. Murdoch, Edward Napierala, Kyndylan Nienhuis, Robert Norton-Wright, Philip Paeps, Lucian Paul-Trifu, Alex Richardson, Michael Roe, Colin Rothwell, Peter Rugg, Hassen Saidi, Stacey Son, Domagoj Stolfa, Andrew Turner, Munraj Vadera, Jonathan Woodruff, Hongyan Xia, and Bjoern A. Zeeb.

Approved for public release; distribution is unlimited. This work was supported by the Defense Advanced Research Projects Agency (DARPA) and the Air Force Research Laboratory (AFRL), under contract FA8750-10-C-0237 (CTSRD), with additional support from FA8750-11-C-0249 (MRC2), HR0011-18-C-0016 (ECATS), and FA8650-18-C-7809 (CIFV) as part of the DARPA CRASH, MRC, and SSITH research programs. The views, opinions, and/or findings contained in this report are those of the authors and should not be interpreted as representing the official views or policies of the Department of Defense or the U.S. Government. We also acknowledge the EPSRC REMS Programme Grant (EP/K008528/1), the ERC ELVER Advanced Grant (789108), the Isaac Newton Trust, the UK Higher Education Innovation Fund (HEIF), Thales E-Security, Microsoft Research Cambridge, Arm Limited, Google, Google DeepMind, HP Enterprise, and the Gates Cambridge Trust.

October 11 2019

08:57

Online suicide games: a form of digital self-harm or a myth?

By Maria Bada & Richard Clayton

October is ‘Cyber Security Month’, and you will see lots of warnings and advice about how to keep yourself safe online. Unfortunately, not every warning is entirely accurate, and particularly egregious examples are the warnings about ‘suicide games’, which are said to involve an escalating series of challenges ending in suicide.

Here at the Cambridge Cybercrime Centre, we’ve been looking into suicide games by interviewing teachers, child protection experts and NGOs; and by tracking mentions of games such as the ‘Blue Whale Challenge’ and ‘Momo’ in news stories and on UK Police and related websites.

We found that the stories about online suicide games have no discernible basis in fact and are linked to misperceptions about actual suicides. A key finding of our work is that media, social media and well-meaning (but baseless) warnings released by authorities are spreading the challenge culture and exaggerating fears.

To clarify, virally spreading challenges are real, and some are unexpectedly dangerous, such as the salt and ice challenge, the cinnamon challenge and, more recently, skin embroidery. Very sadly, of course, suicides are also real – but we are convinced that the combination has no basis in fact.

We’re not alone in our belief. Snopes investigated Blue Whale in 2017 and deemed the story ‘unproven’, while in 2019 the BBC posted a detailed history of Blue Whale showing there was no record of such a game prior to a single Russian media article of dubious accuracy. The UK Safer Internet Centre calls the claims around Momo ‘fake news’, while YouTube has found no evidence to support the claim that there are videos showing or promoting Momo on its platform.

Regardless of whether a challenge is dangerous or not, youngsters are especially motivated to take part, presumably out of curiosity and a desire for attention. The ‘challenge culture’ is a deeply rooted online phenomenon. Young people are constantly receiving media messages and new norms which inform not only their thinking, but also their values and beliefs.

Although there is no evidence that the suicide games are ‘real’, authorities around the world have reacted by releasing warnings and creating information campaigns to warn youngsters and parents about the risks. However, a key concern when discussing, or warning of, suicide games is that this drives children towards the very content of concern and raises the risk of ‘suicide contagion’, which could turn stories into a tragic self-fulfilling prophecy for a small number of vulnerable youths.

Understanding what media content really means, where it comes from and why a certain message has been constructed is crucial for properly understanding and recognising media-mediated messages and their meaning. Adequate answers to all these questions can only be acquired through media literacy. However, in most countries media education is still a secondary activity that teachers or media educators deal with without training or proper materials.

Our research recommends that policy measures be taken, such as: a) awareness and education to ensure that young people can handle risks online and offline; b) development of national and international strategies and guidelines for suicide prevention and for how news related to suicides is presented in media and social media; c) development of social media and media literacy; d) collaborative efforts of media, legal systems and education to prevent suicides; e) guidance for quality control of warnings released by authorities.

Maria Bada presented this work on 24–26 June 2019 at the 24th Annual CyberPsychology, CyberTherapy & Social Networking Conference (CYPSY24) in Norfolk, Virginia, USA. Click here to access the abstract of this paper – the full version of the paper is currently in peer review and should be available soon.

August 23 2019

09:47

Usability of Cybercrime Datasets

By Ildiko Pete and Yi Ting Chua

The availability of publicly accessible datasets plays an essential role in the advancement of cybercrime and cybersecurity research as a field. There has been increasing effort to understand how datasets are created, classified, shared, and used by scholars. However, there have been very few studies that address the usability of these datasets.

As part of an ongoing project to improve the accessibility of cybersecurity and cybercrime datasets, we conducted a case study that examined and assessed the datasets offered by the Cambridge Cybercrime Centre (CCC). We examined two stages of the data sharing process: dataset sharing and dataset usage. Dataset sharing covers three steps: (1) informing potential users of available datasets, (2) providing instructions on the application process, and (3) granting access to users. Dataset usage refers to the process of querying, manipulating and extracting data from the dataset. We were interested in assessing users’ experiences with the data sharing process and discovering challenges and difficulties when using any of the offered datasets.

To this end, we reached out to 65 individuals who applied for access to the CCC’s datasets and are potentially actively using the datasets. The survey questionnaire was administered via Qualtrics. We received sixteen responses, nine of which were fully completed. The responses to open-ended questions were transcribed, and then we performed thematic analysis.

As a result, we discovered two main themes. The first theme is users’ level of technological competence, and the second one is users’ experiences. The findings revealed generally positive user experiences with the CCC’s data sharing process, and users reported no major obstacles with regard to the dataset sharing stage. Most participants have accessed and used the CrimeBB dataset, which contains more than 48 million posts. Users also expressed that they are likely to recommend the dataset to other researchers. During the dataset usage phase, users reported some technical difficulties. Interestingly, these technical issues were specific, such as version conflicts. This highlights that users with a higher level of technical skill also experience technical difficulties, although these are of a different nature from generic technical challenges. Nonetheless, the survey showed the CCC’s success in sharing its datasets with the sub-set of cybercrime and cybersecurity researchers approached in the current study.

Ildiko Pete presented the preliminary findings on 12th August at CSET’19. Click here to access the full paper.

August 17 2019

08:36

Rashomon of disclosure

In a world of changing technology, there are few constants - but if there is one constant in security, it is the rhythmic flare-up of discussions about disclosure on the social-media-du-jour (mailing lists in the past, now mostly Twitter and Facebook).

Many people in the industry have wrestled with, and contributed to, the discussions, norms, and modes of operation - I would particularly like to highlight contributions by Katie Moussouris and Art Manion, but there are many others who deserve mention and whose true impact is unknown outside a small circle. In all discussions of disclosure, it is important to keep in mind that many smart people have struggled with the problem. There may not be easy answers.

In this blog post, I would like to highlight a few aspects of the discussion that are important to me personally - aspects which influenced my thinking, and which are underappreciated in my view.

I have been on many (but not most) sides of the table during my career:
  • On the side of the independent bug-finder who reports to a vendor and is subsequently threatened.
  • On the side of the independent bug-finder who decided that reporting is not worth my time.
  • On the side of building and selling a security appliance that handles malicious input and needs to be built in a way that does not add net exposure for our clients.
  • On the side of building and selling software that clients install.
  • On the side of Google Project Zero, which tries to influence the industry to improve its practices and rectify some of the bad incentives.
The sides of the table that are notably missing here are the role of the middle- or senior-level manager who makes his living shipping software on a tight deadline and is in competition for features, and the role of the security researcher directly selling bugs to governments. I will return to this in the last section.

I expect almost every reader will find something to vehemently disagree with. This is expected, and to some extent, the point of this blog post.

The simplistic view of reporting vulnerabilities to vendors

I will quickly describe the simplistic view of vulnerability reporting / patching. It is commonly brought up in discussions, especially by folks that have not wrestled with the topic for long. The gist of the argument is:
  1. Prior to publishing a vulnerability, the vulnerability is unknown except to the finder and the software vendor.
  2. Very few, if any, people are at risk while we are in this state.
  3. Publishing the information prior to the vendor publishing a patch puts many people at risk (because they can now be hacked). This should hence not happen.
Variants of this argument are used to claim that no researcher should publish vulnerability information before patches are available, or that no researcher should publish information until patches are applied, or that no researcher should publish information that is helpful for building exploits.

This argument, at first glance, is simple, plausible, and wrong. In the following, I will explain the various ways in which this view is flawed.

The Zardoz experience

For those who joined cybersecurity in recent years: Zardoz was a mailing list on which "whitehats" discussed and shared security vulnerabilities with each other so they could be fixed without the "public" knowing about them.
The result of this activity was: Every hacker and active intelligence shop at the time wanted to have access to this mailing list (because it would regularly contain important new attacks). They generally succeeded. Quote from the Wikipedia entry on Zardoz:
On the other hand, the circulation of Zardoz postings among computer hackers was an open secret, mocked openly in a famous Phrack parody of an IRC channel populated by notable security experts.[3]
History shows, again and again, that small groups of people that share vulnerability information ahead of time always have at least one member compromised; there are always attackers that read the communication.

It is reasonably safe to assume that the same holds for the email addresses to which security vulnerabilities are reported. These are high-value targets, and getting access to them (even if it means physical tampering or HUMINT) is so useful that well-funded persistent adversaries must be assumed to have access to them. It is their job, after all.

(Zardoz isn't unique. Other examples are unfortunately less-well documented. Mail spools of internal mailing lists of various CERTs were circulated in hobbyist hacker circles in the early 2000s, and it is safe to assume that any dedicated intelligence agency today can reproduce that level of access.)

The fallacy of uniform risk

Risk is not uniformly distributed throughout society. Some people are more at-risk than others: Dissidents in oppressive countries, holders of large quantities of cryptocurrency, people who think their work is journalism when the US government thinks their work is espionage, political stakeholders and negotiators. Some of them face quite severe consequences from getting hacked, ranging from mild discomfort to death.

The majority of users in the world are much less at risk: The worst-case scenario for them, in terms of getting hacked, is inconvenience and a moderate amount of financial loss.
This means that the naive "counting" of victims in the original argument makes a false assumption: that everybody has the same "things to lose" by getting hacked. This is not the case: some people have their life and liberty at risk, but most people don't. For those who do not, it may actually be rational behavior not to update their devices immediately, or to generally not care much about security - why take precautions against an event that you think is either unlikely or largely not damaging to you?
For those at risk, though, it is often rational to be paranoid - to avoid using technology entirely for a while, to keep things patched, and to invest time and resources into keeping their things secure.
Any discussion of the pros and cons of disclosure should take into account that risk profiles vary drastically. Taking this argument to the extreme, the question arises: "Is it OK to put 100m people at risk of inconvenience if I can reduce the risk of death for 5 people?"
I do not have an answer for this sort of calculation, and given the uncertainty of all probabilities and data points in this, I am unsure whether one exists.

Forgetting about patch diffing

One of the lessons that our industry sometimes (and to my surprise) forgets is: public availability of a patch is, from the attacker's perspective, not much different from a detailed analysis of the vulnerability, including a vulnerability trigger.
There used to be a cottage industry of folks who analyzed patches and wrote reports on what the fixed bugs were, whether they were fixed correctly, and how to trigger them. They usually operated away from the spotlight, but that does not mean they do not exist - many were our customers.
People in the offensive business can build infrastructure that helps them rapidly analyze patches and get the information they need out of them. Defenders, mostly for organizational rather than technical reasons, cannot. This means that in the absence of a full discussion of the vulnerability, defenders will be at a significant information disadvantage compared to attackers.
Without understanding the details of the vulnerability, networks and hosts cannot be monitored for its exploitation, and mitigations-other-than-patching cannot be applied.
Professional attackers, on the other hand, will have all the information about a vulnerability not long after they obtain a patch (if they did not have it beforehand already).
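To illustrate the core idea (real attack tooling diffs disassembled functions and control-flow graphs; this naive byte-level Python sketch is only a flavour, and the file names are hypothetical):

    import hashlib

    def chunk_hashes(path, size=256):
        # Hash a binary in fixed-size chunks; differing chunks localize the change.
        with open(path, "rb") as f:
            data = f.read()
        return {i: hashlib.sha256(data[i:i + size]).digest()
                for i in range(0, len(data), size)}

    def changed_offsets(old_path, new_path, size=256):
        old = chunk_hashes(old_path, size)
        new = chunk_hashes(new_path, size)
        return sorted(i for i in old.keys() | new.keys() if old.get(i) != new.get(i))

    # A fix that shifts later code will also show up as changes, which is why
    # real patch-diffing tools compare functions rather than raw bytes.
    print(changed_offsets("service-v1.0.bin", "service-v1.1.bin"))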

The fallacy of "do not publish triggers"

When publishing about a vulnerability, should "triggers", small pieces of data that hit the vulnerability and crash the program, be published?
Yes, building the first trigger is often time-consuming for an attacker. Why would we save them the time?
Well, because without a public trigger for a vulnerability it is extremely hard for defensive staff to determine whether a particular product in use may contain the bug in question. A prime example of this is CVE-2012-6706: everybody assumed that the vulnerability was only present in Sophos products, and no public PoC was provided, so nobody realized that the bug lived in upstream Unrar. It wasn't until 2017 that it got re-discovered and fixed - five years of extra time for a bug because no trigger was published.
If you have an antivirus gateway running somewhere, or any piece of legacy software, you need at least a trigger to check whether the product includes the vulnerable software. If you are attempting to build any form of custom detection for an attack, you also need the trigger.
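As a sketch of why a trigger matters operationally: with a published trigger, checking whether a deployed product embeds the vulnerable code can be as simple as feeding the trigger to the program and watching for a crash. The command and file names below are hypothetical:

    import signal
    import subprocess

    def crashes_on(argv, trigger_path, timeout=30):
        # Run the target on a trigger file; report the fatal signal, if any (POSIX).
        try:
            proc = subprocess.run(argv + [trigger_path],
                                  stdout=subprocess.DEVNULL,
                                  stderr=subprocess.DEVNULL,
                                  timeout=timeout)
        except subprocess.TimeoutExpired:
            return "TIMEOUT"
        # A negative return code means the process was killed by that signal.
        return signal.Signals(-proc.returncode).name if proc.returncode < 0 else None

    # Hypothetical: does this gateway's unpacker crash on the published trigger?
    print(crashes_on(["/opt/gateway/bin/unpack", "--test"], "trigger.rar"))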

The fallacy of "do not publish exploits"

Now, should exploits be published? Clearly the answer should be no?
In my experience, even large organizations with mature security teams and programs often struggle to understand the changing nature of attacks. Many people that are now in management positions cut their teeth on (from today's perspective) relatively simple bugs, and have not fully understood or appreciated how exploitation has changed.
In general, defenders are almost always at an information disadvantage: Attackers will not tell them what they do, and gleefully applaud and encourage when the defender gets a wrong idea in his head about what to do. Read the declassified cryptolog_126.pdf Eurocrypt trip report to get a good impression of how this works.
Three of the last four sessions were of no value whatever, and indeed there was almost nothing at Eurocrypt to interest us (this is good news!). The scholarship was actually extremely good; it's just that the directions which external cryptologic researchers have taken are remarkably far from our own lines of interest. 
Defense has many resources, but many of them are misapplied: mitigations that do not hold up to an attacker slightly changing strategies, products bought that do not change the attacker's calculus or the exploit economics, and so on.
A nontrivial part of this misapplication stems from information scarcity about real exploits. My personal view is that Project Zero's exploit write-ups, and the many great write-ups by the Pwn2Own competitors and other security research teams (Pangu and other Chinese teams come to mind) about the actual internal mechanisms of their exploits, are invaluable for transmitting understanding of actual attacks to defenders, and are necessary to help the industry stay on course.
Real exploits can be studied, understood, and potentially used by skilled defenders for both mitigation and detection, and to test other defensive measures.

The reality of software shipping and prioritization

Companies that sell software make their money by shipping new features. Managers in these organizations get promoted for shipping said features and reaching more customers. If they succeed in doing so, their career prospects are bright, and by the time the security flaws in the newly-shipped features become evident, they are four steps up the career ladder and two companies away from the risk they created.

The true cost of attack surface is not properly accounted for in modern software development (even if you have an SDLC); largely because this cost is shouldered by the customers that run the software - and even then, only by a select few that have unusual risk profiles.

A sober look at current incentive structures in software development shows that there is next to zero incentive for a team that ships a product to invest in security on a 4-5 year horizon. Everybody perceives themselves to be in breakneck competition, and velocity is prioritized. This includes bug reports: The entire reason for the 90-day deadline that Project Zero enforced was the fact that without a hard deadline, software vendors would routinely not prioritize fixing an obvious defect, because ... why would you distract yourself with doing it if you could be shipping features instead?

The only disincentive to adding new attack surface these days is getting heckled on a blog post or in a Blackhat talk. Has any manager in the software industry ever had their career damaged by shipping particularly broken software and incurring risks for their users? I know of precisely zero examples. If you know of one, please reach out, I would be extremely interested to learn more.

The tech industry as risk-taker on behalf of others

(I will use Microsoft as an example in the following paragraphs, but you can replace it with Apple or Google/Android with only minor changes. The tech giants are quite similar in this.)
Microsoft has made 248bn$+ in profits since 2005. In no year did they make less than 1bn$ in profits per month. Profits in the decade leading up to 2005 were lower, and I could not find numbers, but even in 2000 MS was raking in more than a billion in profits a quarter. And part of these profits were made by incurring risks on behalf of their customers - by making decisions to not properly sandbox the SMB components, by under-staffing security, by not deprecating and migrating customers away from insecure protocols. The software product industry (including mobile phone makers) has reaped excess profits for decades by selling risky products and offloading the risk onto their clients and society. My analogy is that they constructed financial products that yield a certain amount of excess return but blow up disastrously under certain geopolitical events, then sold some of the excess return and *all* of the risk to a third party that is not informed of the risk.
Any industry that can make profits while offloading the risks will incur excess risks, and regulation is required to make sure that those that make the profits also carry the risks. Due to historical accidents (the fact that software falls under copyright) and unwillingness to regulate the golden goose, we have allowed 30 years of societal-risk-buildup, largely driven by excess profits in the software and tech industry.
Now that MS (and the rest of the tech industry) has sold the rest of society a bunch of toxic paper that blows up in case of some geopolitical tail events (like the resurgence of great-power competition), they really do not wish to take the blame for it - after all, there may be regulation in the future, and they may have to actually shoulder some of the risks they are incurring.
What is the right solution to such a conundrum? Lobbying, and a concerted PR effort to deflect the blame. Security researchers, 0-day vendors, and people that happen to sell tools that could be useful to 0-day vendors are much more convenient targets than admitting: All this risk that is surfaced by security research and 0-day vendors is originally created for excess profit by the tech industry.
FWIW, it is rational for them to do so, but I disagree that we should let them do it :-). 

A right to know

My personal view on disclosure is influenced by the view that consumers have a right to all available information about the known risks of the products they use. If an internal tobacco-industry study showed that smoking may cause cancer, that should have been public from day 1, including all data.

Likewise, consumers of software products should have access to all known information about the security of their product, all the time. My personal view is that the 90-day deadlines that are accepted these days are an attempt to balance competing interests (availability of patches vs. telling users about the insecurity of their device).

Delaying much further, or withholding data from the customer, is - in my personal opinion - a form of deceit; the tech industry should be much more aggressive in warning users that, under current engineering practices, their personal data is never fully safe in any consumer-level device. Individual bug chains may cost a million dollars now, but that million dollars is amortized over a large number of targets, so the cost per individual compromise is reasonably low.

I admit that my view (giving users all the information so that they can (at least in theory) make good decisions using all available information) is a philosophical one: I believe that withholding available information that may alter someone's decision is a form of deceit, and that consent (even in business relationships) requires transparency with each other. Other people may have different philosophies.

Rashomon, or how opinions are driven by career incentives

The movie Rashomon that gave this blog post the title is a beautiful black-and-white movie from 1950, directed by the famous Akira Kurosawa. From the Wikipedia page:
The film is known for a plot device that involves various characters providing subjective, alternative, self-serving, and contradictory versions of the same incident.
If you haven't seen it, I greatly recommend watching it.

The reason why I gave this blog post the title "Rashomon of disclosure" is to emphasize the complexity of the situation. There are many facets, and my views are strongly influenced by the sides of the table I have been on - and those I haven't been on.

Everybody participating in the discussion has some underlying interests and/or philosophical views that influence their argument.

Software vendors do not want to face up to generating excess profits by offloading risk to society. 0-day vendors do not want to face up to the fact that a fraction of their clients kills people (sometimes entirely innocent ones), or at least break laws in some jurisdiction. Security researchers want to have the right to publish their research, even if they fail to significantly impact the broken economics of security.

Everybody wants to be the hero of their own story, and in their own account of the state of the world, they are.

None of the questions surrounding vulnerability disclosure, vulnerability discovery, and the trade-offs involved in it are easy. People that claim there is an easy and obvious path to go about security vulnerability disclosure have either not thought about it very hard, or have sufficiently strong incentives to self-delude that there is one true way.

After 20+ years of seeing this debate go to and fro, my request to everybody is: When you explain to the world why you are the hero of your story, take a moment to reflect on alternative narratives, and make an effort to recognize that the story is probably not that simple.

August 01 2019

07:34

IDA 7.4: IDAPython and Python 3

IDA 7.4 will still ship with IDAPython for Python 2.7 by default, but users will now have the opportunity to pick IDAPython for Python 3.x at installation time!

07:32

IDA 7.4: Turning off IDA 6.x compatibility in IDAPython by default

IDA 7.4 will ship with the IDAPython “IDA 6.x” compatibility layer off by default. Please see this article for more information!

July 16 2019

07:08

PETS 2019

I’m at PETS 2019 and will try to liveblog some of the sessions in followups to this post. Note that there is also a livestream of the symposium.

July 12 2019

14:26

Hiring for the Cambridge Cybercrime Centre

We have just re-advertised a “post-doc” position in the Cambridge Cybercrime Centre: https://www.cambridgecybercrime.uk. The vacancy arises because Daniel is off to become a Chancellor’s Fellow at Strathclyde; the re-advertisement is because of a technical flaw in the previous advertising process (which is now addressed).

We are looking for an enthusiastic researcher to join us to work on our datasets of cybercrime activity, collecting new types of data, maintaining existing datasets and doing innovative research using our data. The person we appoint will define their own goals and objectives and pursue them independently, or as part of a team.

An ideal candidate would identify cybercrime datasets that can be collected, build the collection systems and then do cutting edge research on this data — whilst encouraging other academics to take our data and make their own contributions to the field.

We are not necessarily looking for existing experience in researching cybercrime, although this would be a bonus. However, we are looking for strong programming skills — and experience with scripting languages and databases would be much preferred. Good knowledge of English and communication skills are important.

Please follow this link to the advert to read the formal advertisement for the details about exactly who and what we’re looking for and how to apply — and please pay attention to our request that in the covering letter you create as part of the application you should explain which particular aspects of cybercrime research are of particular interest to you.

July 11 2019

10:53

2019 Cambridge Cybercrime Conference

I’m at the fourth Cambridge Cybercrime Conference, which I will try to liveblog in followups to this post.

July 10 2019

16:22

The lifetime of an Android API vulnerability

By Daniel Carter, Daniel Thomas, and Alastair Beresford

Security updates are an important mechanism for protecting users and their devices from attack, and therefore it’s important that vendors produce security updates, and that users apply them. Producing security updates is particularly difficult when more than one vendor needs to make changes in order to secure a system.

We studied one such example in previous research (open access). The specific vulnerability (CVE-2012-6636) affected Android devices and allowed JavaScript running inside a WebView of an app (e.g. an advert) to run arbitrary code inside the app itself, with all the permissions of the app. The vulnerability could be exploited remotely by an attacker who bought ads which supported JavaScript. In addition, since most ads at the time were served over HTTP, the vulnerability could also be exploited if an attacker controlled a network used by the Android device (e.g. WiFi in a coffee shop). The fix required both the Android operating system, and all apps installed on the handset, to support at least Android API Level 17. Thus, the deployment of an effective solution for users was especially challenging.
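The safety condition here is a conjunction, which is what made deployment so slow. As a restatement of the paragraph above (a Python sketch of the logic, not code from our study):

    def device_safe(os_api_level, installed_app_target_levels):
        # The reflection attack is blocked only when the OS provides the
        # API 17 gating *and* every installed app targets API level 17+;
        # a single stale app re-exposes the device through that app.
        return (os_api_level >= 17 and
                all(target >= 17 for target in installed_app_target_levels))

    print(device_safe(19, [17, 21, 26]))  # True: OS and all apps updated
    print(device_safe(19, [16, 21, 26]))  # False: one stale app remains exploitable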

When we published our paper in 2015, we predicted that this vulnerability would not be patched on 95% of devices in the Android ecosystem until January 2018 (plus or minus a standard deviation of 1.23 years). Since this date has now passed, we decided to check whether our prediction was correct.

To perform our analysis we used data on deployed API versions taken from (almost) monthly snapshots of Google’s Android Distribution Dashboard, which we have been tracking. The good news is that we found the operating-system side of the fix crossed the 95% threshold in May 2017, seven months earlier than our best estimate, and within one standard deviation of our prediction. The most recent data, for May 2019, shows deployment has reached 98.2% of devices in use. Nevertheless, fixing this aspect of the vulnerability took well over 4 years to reach 95% of devices.

Figure: Proportion of devices safe from the JavaScript-to-Java vulnerability. For details of how this is calculated, see our previous paper.
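For readers who want to reproduce the threshold check, here is a minimal Python sketch; the numbers below are invented placeholders standing in for our archive of dashboard snapshots:

    from datetime import date

    # Invented (snapshot date, proportion of devices at API level >= 17) pairs.
    snapshots = [
        (date(2017, 2, 1), 0.936),
        (date(2017, 3, 1), 0.943),
        (date(2017, 5, 1), 0.952),
        (date(2017, 6, 1), 0.957),
    ]

    def first_crossing(snapshots, threshold=0.95):
        # Return the date of the first snapshot at or above the threshold.
        for when, proportion in snapshots:
            if proportion >= threshold:
                return when
        return None

    print(first_crossing(snapshots))  # 2017-05-01 with these illustrative numbers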

Google delivered a further fix in Android 4.4.3 that blocked access to the getClass method from JavaScript, considerably reducing the risk of exploitation even from apps which were not updated. A conservative estimate of the deployment of this further fix is shown on the graph, reaching 95% adoption in April 2019. On the app side of things, Google has also been encouraging app developers to update. From 1st November 2018, updates to apps on Google Play must target API level 26 or higher, and from 1st November 2019 updates to apps must target API level 28 or higher. This change forces the app-side changes necessary to fix this vulnerability. Unfortunately we don’t have good data on the distribution of apps installed on handsets, but we expect that most Android devices are now secure against this vulnerability.

Our work is not done however, and we are still looking into the security of mobile devices. This summer we are extending the work from our other 2015 paper Security Metrics for the Android Ecosystem, where we analysed the composition of Android vulnerabilities. Last time we used distributions of deployed Android versions on devices from Device Analyzer (an Android measurement app we deployed to Google Play), the device management system of an FTSE 100 company, and User-Agent string data from an ISP in Rwanda. If you might be able to share similar data with us to support our latest research work then please get in touch: contact@androidvulnerabilities.org.

June 26 2019

14:34

Fourth Annual Cybercrime Conference

The Cambridge Cybercrime Centre is organising another one-day conference on cybercrime on Thursday, 11th July 2019.

We have a stellar group of invited speakers who are at the forefront of their fields.

They will present various aspects of cybercrime from the point of view of criminology, policy, security economics and policing.

This one-day event, to be held in the Faculty of Law, University of Cambridge, will follow immediately after (and will be in the same venue as) the “12th International Conference on Evidence Based Policing” organised by the Institute of Criminology, which runs on the 9th and 10th July 2019.

Full details (and information about booking) are here.

June 19 2019

11:08

IDA 7.3: CSS styling

Since version 7.3, IDA is styled using CSS. Please see this article to see what can be done, and how!

June 18 2019

09:05

IDA 7.3: Qt 5.6.3 configure options & patch

A handful of our users have already requested information regarding the Qt 5.6.3 build that ships with IDA 7.3.

Configure options

Here are the options that were used to build the libraries on each platform:

  • Windows: ...\5.6.3\configure.bat "-nomake" "tests" "-qtnamespace" "QT" "-confirm-license" "-accessibility" "-opensource" "-force-debug-info" "-platform" "win32-msvc2015" "-opengl" "desktop" "-prefix" "C:/Qt/5.6.3-x64"
    • Note that you will have to build with Visual Studio 2015 or newer, to obtain compatible libs
  • Linux: .../5.6.3/configure "-nomake" "tests" "-qtnamespace" "QT" "-confirm-license" "-accessibility" "-opensource" "-force-debug-info" "-platform" "linux-g++-64" "-developer-build" "-fontconfig" "-qt-freetype" "-qt-libpng" "-glib" "-qt-xcb" "-dbus" "-qt-sql-sqlite" "-gtkstyle" "-prefix" "/usr/local/Qt/5.6.3-x64"
  • Mac OSX: .../5.6.3/configure "-nomake" "tests" "-qtnamespace" "QT" "-confirm-license" "-accessibility" "-opensource" "-force-debug-info" "-platform" "macx-clang" "-debug-and-release" "-fontconfig" "-qt-freetype" "-qt-libpng" "-qt-sql-sqlite" "-prefix" "/Users/Shared/Qt/5.6.3-x64"

Patch

In addition to the specific configure options, the Qt build that ships with IDA includes the following patch. You should therefore apply it to your own Qt 5.6.3 sources before compiling, in order to obtain similar binaries: patch -p 1 < path/to/qt-5_6_3_full-IDA73.patch

Note that this patch should work without any modification against the 5.6.3 release as found there. You may have to fiddle with it if your Qt 5.6.3 sources come from somewhere else.

June 05 2019

14:38

SHB 2019 – Liveblog

I’ll be trying to liveblog the twelfth workshop on security and human behaviour at Harvard. I’m doing this remotely because of US visa issues, as I did for WEIS 2019 over the last couple of days. Ben Collier is attending as my proxy and we’re trying to build on the experience of telepresence reported here and here. My summaries of the workshop sessions will appear as followups to this post.

June 03 2019

13:45

WEIS 2019 – Liveblog

I’ll be trying to liveblog the seventeenth workshop on the economics of information security at Harvard. I’m not in Cambridge, Massachusetts, but in Cambridge, England, because of a visa held in ‘administrative processing’ (a fate that has befallen several other cryptographers). My postdoc Ben Collier is attending as my proxy (inspired by this and this).

May 30 2019

14:38

The Changing Cost of Cybercrime

In 2012 we presented the first systematic study of the costs of cybercrime. We have now repeated our study, to work out what’s changed in the seven years since then.

Measuring the Changing Cost of Cybercrime will appear on Monday at WEIS. The period has seen huge changes, with the smartphone replacing the PC and laptop as the consumer terminal of choice, with Android replacing Windows as the most popular operating system, and with many services moving to the cloud. Yet the overall pattern of cybercrime is much the same.

We know a lot more than we did then. Back in 2012, we guessed that cybercrime was about half of all crime, by volume and value; we now know from surveys in several countries that this is the case. Payment fraud has doubled, but fallen slightly as a proportion of payment value; the payment system has got larger, and slightly more efficient.

So what’s changed? New cybercrimes include ransomware and other offences related to cryptocurrencies; travel fraud has also grown. Business email compromise and its cousin, authorised push payment fraud, are also growth areas. We’ve also seen serious collateral damage from cyber-weapons such as the NotPetya worm. The good news is that crimes that infringe intellectual property – from patent-infringing pharmaceuticals to copyright-infringing software, music and video – are down.

Our conclusions are much the same as in 2012. Most cyber-criminals operate with impunity, and we have to fix this. We need to put a lot more effort into catching and punishing the perpetrators.

Our new paper is here. For comparison the 2012 paper is here, while a separate study on the emotional cost of cybercrime is here.

May 21 2019

20:56

Calibration Fingerprint Attacks for Smartphones

When you visit a website, your web browser provides a range of information to the website, including the name and version of your browser, screen size, fonts installed, and so on. Website authors can use this information to provide an improved user experience. Unfortunately this same information can also be used to track you. In particular, this information can be used to generate a distinctive signature, or device fingerprint, to identify you.

A device fingerprint allows websites to detect your return visits or track you as you browse from one website to the next across the Internet. Such techniques can be used to protect against identity theft or credit card fraud, but also allow advertisers to monitor your activities and build a user profile of the websites you visit (and therefore a view into your personal interests). Browser vendors have long worried about the potential privacy invasion from device fingerprinting and have included measures to prevent such tracking. For example, on iOS, the Mobile Safari browser uses Intelligent Tracking Prevention to restrict the use of cookies, prevent access to unique device settings, and eliminate cross-domain tracking.

We have developed a new type of fingerprinting attack, the calibration fingerprinting attack. Our attack uses data gathered from the accelerometer, gyroscope and magnetometer sensors found in smartphones to construct a globally unique fingerprint. Our attack can be launched by any website you visit or any app you use on a vulnerable device without requiring any explicit confirmation or consent from you. The attack takes less than one second to generate a fingerprint which never changes, even after a factory reset. This attack therefore provides an effective means to track you as you browse across the web and move between apps on your phone.

(Video: a one-minute demo describing how the attack works.)

Our approach works by carefully analysing the data from sensors which are accessible without any special permissions on both websites and apps. Our analysis infers the per-device factory calibration data which manufacturers embed into the firmware of the smartphone to compensate for systematic manufacturing errors. This calibration data can then be used as the fingerprint.
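As a simplified flavour of how calibration data can leak through ordinary readings (a Python sketch with invented numbers, not the algorithm from our paper): an idealized sensor reports integer ADC counts multiplied by a per-device gain, so all readings sit on a grid whose spacing is exactly that gain.

    import numpy as np

    rng = np.random.default_rng(0)
    true_gain = 0.001234                          # invented per-device calibration constant
    counts = rng.integers(-2000, 2000, size=500)  # idealized integer ADC outputs, no noise
    readings = counts * true_gain                 # the values a website or app can observe

    # The readings lie on a grid of spacing true_gain, so the smallest gap between
    # distinct readings recovers the gain (assuming two adjacent counts were seen).
    estimated_gain = np.diff(np.unique(readings)).min()
    print(estimated_gain)                         # ~0.001234, stable across visits and resets

Real sensor output is noisy and the calibration involves a gain matrix rather than a single scalar, so the analysis in our paper is considerably more involved, but the principle is the same: the recovered constants are set per device at the factory and survive a factory reset.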

In general, it is difficult to create a unique fingerprint on iOS devices due to strict sandboxing and device homogeneity. However, we demonstrated that our approach can produce globally unique fingerprints for iOS devices from an installed app: around 67 bits of entropy for the iPhone 6S. Calibration fingerprints generated by a website are less unique (around 42 bits of entropy for the iPhone 6S), but they are orthogonal to existing fingerprinting techniques and together they are likely to form a globally unique fingerprint for iOS devices. Apple adopted our proposed mitigations in iOS 12.2 for apps (CVE-2019-8541). Apple recently removed all access to motion sensors from Mobile Safari by default.

We presented this work on 21st May at the IEEE Symposium on Security and Privacy 2019. For more details, please visit the SensorID website and read our paper:

Jiexin Zhang, Alastair R. Beresford and Ian Sheret, SensorID: Sensor Calibration Fingerprinting for Smartphones, Proceedings of the 40th IEEE Symposium on Security and Privacy (S&P), 2019.

May 17 2019

12:48

Security Engineering: Third Edition

I’m writing a third edition of my best-selling book Security Engineering. The chapters will be available online for review and feedback as I write them.

Today I put online a chapter on Who is the Opponent, which draws together what we learned from Snowden and others about the capabilities of state actors, together with what we’ve learned about cybercrime actors as a result of running the Cambridge Cybercrime Centre. Isn’t it odd that almost six years after Snowden, nobody’s tried to pull together what we learned into a coherent summary?

There’s also a chapter on Surveillance or Privacy which looks at policy. What’s the privacy landscape now, and what might we expect from the tussles over data retention, government backdoors and censorship more generally?

There’s also a preface to the third edition.

As the chapters come out for review, they will appear on my book page, so you can give me comment and feedback as I write them. This collaborative authorship approach is inspired by the late David MacKay. I’d suggest you bookmark my book page and come back every couple of weeks for the latest instalment!

May 16 2019

16:00

Hiring for the Cambridge Cybercrime Centre

We have yet another “post-doc” position in the Cambridge Cybercrime Centre: https://www.cambridgecybercrime.uk (for the happy reason that Daniel is off to become a Chancellor’s Fellow at Strathclyde).

We are looking for an enthusiastic researcher to join us to work on our datasets of cybercrime activity, collecting new types of data, maintaining existing datasets and doing innovative research using our data. The person we appoint will define their own goals and objectives and pursue them independently, or as part of a team.

An ideal candidate would identify cybercrime datasets that can be collected, build the collection systems and then do cutting edge research on this data — whilst encouraging other academics to take our data and make their own contributions to the field.

We are not necessarily looking for existing experience in researching cybercrime, although this would be a bonus. However, we are looking for strong programming skills — and experience with scripting languages and databases would be much preferred. Good knowledge of English and communication skills are important.

Please follow this link to the advert to read the formal advertisement for the details about exactly who and what we’re looking for and how to apply — and please pay attention to our request that in the covering letter you create as part of the application you should explain which particular aspects of cybercrime research are of particular interest to you.
