Apple Watch provides murder case clues

Police in Australia have presented data gathered from an Apple Watch as evidence in a murder trial.

Grandmother Myrna Nilsson was wearing the device when she was killed in 2016.

Her daughter-in-law Caroline Nilsson is accused of staging an ambush, after claiming she was tied up by a group of men who entered the house.

But data from the victim’s smartwatch suggests that she was ambushed as she arrived home, and died hours earlier than Ms Nilsson claims.


Ms Nilsson told police that her mother-in-law had been followed home by a group of men in a car.

According to ABC News, Ms Nilsson said her mother-in-law had argued with the men outside the house for about 20 minutes, but she did not hear the fatal attack because she was in the kitchen with the door closed.

A neighbour called the police when Ms Nilsson emerged from the house gagged and distressed after 22:00.

Ms Nilsson says the attackers had tied her up and that she had made her way out of the house as soon as they had left.

But prosecutor Carmen Matteo said evidence from the victim’s smartwatch suggested Ms Nilsson had staged the home invasion.

The body of 57-year-old Myrna Nilsson was found in the laundry room of her home in Valley View, Adelaide, in September 2016.


“The evidence from the Apple Watch is a foundational piece of evidence for demonstrating the falsity of the defendant’s account to police,” said Ms Matteo.

“A watch of this type… contains sensors capable of tracking the movement and rate of movement of the person wearing it… it also measures the heart rate.”

The prosecution alleged that the watch had recorded data consistent with a person going into shock and losing consciousness.

“The deceased must have been attacked at around 6:38pm and had certainly died by 6:45pm,” she said.
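The reasoning described above – using timestamped movement and heart-rate samples to bound the window in which activity ceased – can be caricatured in a short sketch. Everything here (the function name, the threshold and the sample data) is invented purely for illustration; it has no connection to the actual forensic analysis or to Apple's software.

```python
def activity_window(samples, move_threshold=0.1):
    """Estimate when wearer activity ceased from ordered sensor samples.

    samples: list of (timestamp_minutes, heart_rate_bpm, movement_level)
    Returns (last_active_time, first_flat_time): the last sample showing
    movement and a heartbeat, and the first sample after it showing neither.
    """
    last_active = None
    first_flat = None
    for t, hr, move in samples:
        if move > move_threshold and hr > 0:
            last_active = t
            first_flat = None  # reset: activity resumed
        elif first_flat is None:
            first_flat = t
    return last_active, first_flat

# Hypothetical readings, minutes after arriving home
samples = [
    (0, 82, 0.9),   # walking in
    (3, 110, 0.7),  # elevated heart rate, still moving
    (7, 60, 0.0),   # movement stops
    (10, 0, 0.0),   # no further heartbeat recorded
]
```

With this toy data, `activity_window(samples)` brackets the cessation of activity between minutes 3 and 7 – the same kind of bounding argument the prosecution made with the real watch data.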

“If that evidence is accepted, it tends to contradict the accused’s version of an argument occurring between the deceased and these men outside the laundry for a period of up to 20 minutes.

“Her emergence from the house was well after 10:00pm and if the Apple Watch evidence is accepted, that is over three hours after the attack on the deceased.”

Magistrate Oliver Koehn denied Ms Nilsson bail based on the “apparent strength of the prosecution’s case”. The trial will continue in June.

Brain back-up firm Nectome loses link to MIT

A company attempting to map people’s brains so their memories can be stored in computers has lost its link to one of the United States’ top universities.

US start-up Nectome revealed its brain back-up plan last month, warning at the time that the process involved would be “100% fatal”.

A number of neuroscientists subsequently poured scorn on the plan.

The Massachusetts Institute of Technology (MIT) has now announced that it is severing ties with the project.

One of the university’s professors had previously benefitted from a federal grant given to Nectome and was attempting to combine its work with his own research into mouse brains.

“Neuroscience has not sufficiently advanced to the point where we know whether any brain preservation method is powerful enough to preserve all the different kinds of biomolecules related to memory and the mind,” MIT said in a blog post explaining its decision.

Nectome has responded saying: “We appreciate the help MIT has given us, understand their choice, and wish them the best.”

‘Potential to benefit humanity’

The university’s in-house publication MIT Technology Review was the first to draw attention to Nectome’s plans and the educational establishment’s own involvement.

It has since reported that the collaboration had attracted “sharp criticism” from experts in the field, who feared it lent credibility to an effort that was doomed to fail.

“Fundamentally, the company is based on a proposition that is just false,” said Sten Linnarsson of the Karolinska Institute in Sweden.

“[And there’s a risk] some people actually kill themselves to donate their brains.”

Nectome had previously said that it believed it would one day be possible to survey the connectome – the map of neural connections within the brain – to such a detailed degree that it would be able to reconstruct a person’s memories.

In order to achieve this, the brain must be preserved at the point of death – a process called vitrifixation.

MIT Technology Review had reported that the firm was soon hoping to test its theories on the head of someone planning a doctor-assisted suicide.

However, Nectome has acknowledged that its work is at a relatively early stage.

“We believe that clinical human brain preservation has immense potential to benefit humanity, but only if it is developed in the light, with input from medical and neuroscience experts,” it said in a statement posted to its website.

“We believe that rushing to apply vitrifixation today would be extremely irresponsible and hurt eventual adoption of a validated protocol.”

Despite the sceptics, Nectome has won a $960,000 (£687,000) grant from the US National Institute of Mental Health.

It is also backed by Y Combinator, a high-profile Silicon Valley-based funder that previously invested in Dropbox, Airbnb and Reddit among others.

HTC’s Tom Daley swimming pool selfie advert banned

An advert showing British Olympic diver Tom Daley using a smartphone at a swimming pool has been banned on the grounds that similar behaviour by consumers would damage the device.

HTC has promoted the ad on social media since mid-2017.

But an investigation by the UK’s advertising watchdog discovered that the device’s own instructions said the phone should not come into contact with pool water.

HTC apologised on Thursday.

The Taiwanese firm also removed the promotion from its YouTube channel.

In a statement, it said: “We are disappointed by the ASA ruling, but have removed the video from our sites.

“We apologise if anyone felt misled as to the handset’s water resistant capabilities.”

  • Amazon listings ‘misled’ customers over savings
  • Poundland ‘naughty’ elf ad deemed ‘irresponsible’
  • ‘Scientific’ ad by eHarmony banned
  • Geordie Shore star broke ad rules on Snapchat

    The advert was designed to highlight the HTC U11’s squeezable sides, which – when pressed – trigger a photo from its front “selfie” camera.

    It showed Mr Daley repeatedly jumping from the highest platform at a swimming pool, and taking images of himself as he fell.

    In addition, the athlete was shown using the phone as he climbed out of the water.

    HTC defended the campaign on the grounds that the device’s IP67 waterproof rating meant it could be briefly submerged up to a depth of 1m (3.3ft).

    It said that because Mr Daley had entered the water feet-first while holding the phone above his head, the device had not gone any deeper.

    Furthermore, an on-screen warning had told viewers not to “try this stunt”.

    But the Advertising Standards Authority said that a normal member of the public attempting something similar would be unlikely to be able to prevent their phone sinking below 1m.

    The watchdog also noted that HTC had acknowledged that “there were too many variations of water temperature and chemical composition” to be able to say that the U11 could be used in most swimming pools.

    And it highlighted that the device’s own instructions said it should not be intentionally submerged in water, and were that to happen by accident its buttons should not be pressed immediately afterwards.

    As a result it judged the ad to have exaggerated the phone’s capabilities in a misleading manner.

    Still online

    Although the original complaint had been about the appearance of the ad on Facebook, the ASA said that other posts must now be dealt with.

    “Our ruling against HTC applies across media,” a spokesman told the BBC.

    “We expect HTC to ensure its ad is removed from all media and we’ll be contacting the company to remind them of that.”

    The BBC understands that the firm intends to remove the ad from all its global accounts by the end of the day.

    The ASA also issued other tech-related rulings among its latest decisions, including:

    • a ban of an ad for the OnePlus 5 smartphone, which had featured “excessive gore”
    • finding that four listings on Amazon’s UK website had misrepresented the size of the discounts offered on electronic goods
    • agreeing that a TV ad by Barclays Bank had misleadingly implied that if a website featured a green padlock symbol in its browser address bar then it could be trusted
    • judging that the charity Electrosensitivity-UK held inadequate evidence to back up claims in a poster that wi-fi and other types of electromagnetic radiation posed health risks

Facebook ‘ugly truth’ growth memo haunts firm

A Facebook executive’s memo that claimed the “ugly truth” was that anything it did to grow was justified has been made public, embarrassing the company.

The 2016 post said that this applied even if it meant people might die as a result of bullying or terrorism.

Both the author and the company’s chief executive, Mark Zuckerberg, have denied they actually believe the sentiment.

But it risks overshadowing the firm’s efforts to tackle an earlier scandal.

Facebook has been under intense scrutiny since it acknowledged that it had received reports that a political consultancy – Cambridge Analytica – had not destroyed data harvested from about 50 million of its users years earlier.

The memo, written by Andrew Bosworth, was first made public by the BuzzFeed news site.

The 18 June 2016 memo:

So we connect more people.

That can be bad if they make it negative. Maybe it costs a life by exposing someone to bullies. Maybe someone dies in a terrorist attack co-ordinated on our tools.

And still we connect people.

The ugly truth is that we believe in connecting people so deeply that anything that allows us to connect more people more often is *de facto* good. It is perhaps the only area where the metrics do tell the true story as far as we are concerned.


That’s why all the work we do in growth is justified. All the questionable contact importing practices. All the subtle language that helps people stay searchable by friends. All of the work we do to bring more communication in. The work we will likely have to do in China some day. All of it.


Mr Bosworth – who co-invented Facebook’s News Feed – has held high-level posts at the social network since 2006, and is currently in charge of its virtual reality efforts.

Mr Bosworth has since tweeted that he “didn’t agree” with the post at the time he had posted it, but he had shared it with staff to be “provocative.”

“Having a debate around hard topics like these is a critical part of our process and to do that effectively we have to be able to consider even bad ideas,” he added.

  • Facebook privacy settings revamped after scandal
  • Cambridge Analytica files spell out election tactics
  • Data row: Facebook’s Zuckerberg will not appear before MPs

    Mark Zuckerberg has issued his own statement.

    “Boz is a talented leader who says many provocative things,” it said.

    “This was one that most people at Facebook including myself disagreed with strongly. We’ve never believed the ends justify the means.”

    A follow-up report by The Verge revealed that dozens of Facebook employees had subsequently used its internal chat tools to discuss concerns that such material had been leaked to the media.


    By Rory Cellan-Jones, Technology correspondent

    What immediately struck me about this leaked memo was the line about “all the questionable contact importing practices”.

    When I downloaded my Facebook data recently, it was the presence of thousands of my phone contacts that startled me. But the company’s attitude seemed to be that this was normal and it was up to users to switch off the function if they didn’t like it.

    What we now know is that in 2016 a very senior executive thought this kind of data gathering was questionable.

    So, why is it only now that the company is having a debate about this and other dubious practices?

    Until now, Facebook has not been leaky. Perhaps we will soon get more insights from insiders as this adolescent business tries to grow up and come to terms with its true nature.

    Fact checking

    The disclosure coincided with Facebook’s latest efforts to address public and investor concerns about its management.

    Its shares are trading about 14% lower than they were before the Cambridge Analytica scandal began, and several high-profile figures have advocated deleting Facebook accounts.

    The company hosted a press conference on Thursday, at which it said it had:

    • begun fact-checking photos and videos posted in France, and would expand this to other countries soon
    • developed a new fake account investigative tool to prevent harmful election-related activities
    • started work on a public archive that will make it possible for journalists and others to investigate political ads posted to its platform

      In previous days it had also announced a revamp of its privacy settings, and said it would restrict the amount of data exchanged with businesses that collect information on behalf of advertisers.

      The latest controversy is likely, however, to provide added ammunition for critics.

      CNN reported earlier this week that Mr Zuckerberg had decided to testify before Congress “within a matter of weeks” after refusing a request to do so before UK MPs. However, the BBC has been unable to independently verify whether he will answer questions in Washington.

Facebook’s Zuckerberg fires back at Apple’s Tim Cook

Facebook’s chief executive has defended his leadership following criticism from his counterpart at Apple.

Mark Zuckerberg said it was “extremely glib” to suggest that because the public did not pay to use Facebook, the firm did not care about them.

Last week, Apple’s Tim Cook said it was an “invasion of privacy” to traffic in users’ personal lives.

And when asked what he would do if he were Mr Zuckerberg, Mr Cook replied: “I wouldn’t be in that situation.”

Facebook has faced intense criticism after it emerged that it had known for years that Cambridge Analytica had harvested data from about 50 million of its users, but had relied on the political consultancy to self-certify that it had deleted the information.

Channel 4 News has since reported that at least some of the data in question is still in circulation despite Cambridge Analytica insisting it had destroyed the material.

Mr Zuckerberg was asked about Mr Cook’s comments during a lengthy interview given to news site Vox about the privacy scandal.

He also acknowledged that Facebook was still not transparent enough about some of the choices it had taken, and floated the idea of an independent panel being able to override some of its decisions.

‘Dire situation’

Mr Cook has spoken in public twice since Facebook’s data-mining controversy began.

On 23 March, he took part in the China Development Forum in Beijing.

“I think that this certain situation is so dire and has become so large that probably some well-crafted regulation is necessary,” news agency Bloomberg quoted him as saying in response to a question about the social network’s problems.

“The ability of anyone to know what you’ve been browsing about for years, who your contacts are, who their contacts are, things you like and dislike and every intimate detail of your life – from my own point of view it shouldn’t exist.”

  • Facebook haunted by ‘ugly truth’ memo
  • Facebook privacy settings revamped after scandal
  • Zuckerberg will not appear before MPs

    Then in an interview with MSNBC and Recode on 28 March, Mr Cook said: “I think the best regulation is no regulation, is self-regulation. However, I think we’re beyond that here.”

    During this second appearance – which has yet to be broadcast in full – he added: “We could make a tonne of money if we monetised our customer, if our customer was our product. We’ve elected not to do that… Privacy to us is a human right.”

    Apple makes most of its profits from selling smartphones, tablets and other computers, as well as associated services such as online storage and its various media stores.

    This contrasts with other tech firms whose profits are largely derived from advertising, including Google, Twitter and Facebook.

    Mr Zuckerberg had previously told CNN that he was “open” to new regulations.

    But he defended his business model when questioned about Mr Cook’s views, although he mentioned neither Apple nor its leader by name.

    “I find that argument, that if you’re not paying that somehow we can’t care about you, to be extremely glib and not at all aligned with the truth,” he said.

    “The reality here is that if you want to build a service that helps connect everyone in the world, then there are a lot of people who can’t afford to pay.”

    He added: “I think it’s important that we don’t all get Stockholm syndrome and let the companies that work hard to charge you more convince you that they actually care more about you, because that sounds ridiculous to me.”

    Mr Zuckerberg also defended his leadership by invoking Amazon’s chief executive.

    “I make all of our decisions based on what’s going to matter to our community and focus much less on the advertising side of the business,” he said.

    “I thought Jeff Bezos had an excellent saying: ‘There are companies that work hard to charge you more, and there are companies that work hard to charge you less.’”

    ‘Turned into a beast’

    Elsewhere in the 49-minute interview, Mr Zuckerberg said he hoped to make Facebook more “democratic” by giving members a chance to challenge decisions its own review team had taken about what content to permit or ban.

    Eventually, he said, he wanted something like the “Supreme Court”, in which people who did not work for the company made the ultimate call on what was acceptable speech.

    Mr Zuckerberg also responded to recent criticism from a UN probe into allegations of genocide against the Rohingya Muslims in Myanmar.

    Last month, one of the human rights investigators said Facebook had “turned into a beast” and had “played a determining role” in stirring up hatred against the group.

    Mr Zuckerberg claimed messages had been sent “to each side of the conflict” via Facebook Messenger, attempting to make them go to the same locations to fight.

    But he added that the firm had now set up systems to detect such activity.

    “We stop those messages from going through,” he added.

    “But this is certainly something that we’re paying a lot of attention to.”

Killer robots: Experts warn of ‘third revolution in warfare’

More than 100 leading robotics experts are urging the United Nations to take action in order to prevent the development of “killer robots”.

In a letter to the organisation, artificial intelligence (AI) leaders, including billionaire Elon Musk, warn of “a third revolution in warfare”.

The letter says “lethal autonomous” technology is a “Pandora’s box”, adding that time is of the essence.

The 116 experts are calling for a ban on the use of AI in managing weaponry.

“Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend,” the letter says.

“These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways,” it adds.

There is an urgent tone to the message from the technology leaders, who warn that “we do not have long to act”.

“Once this Pandora’s box is opened, it will be hard to close.”

  • Warning against AI arms race
  • Soldiers that never sleep
  • AI fighter pilot wins in combat

    Experts are calling for what they describe as “morally wrong” technology to be added to the list of weapons banned under the UN Convention on Certain Conventional Weapons (CCW).

    Along with Tesla co-founder and chief executive Mr Musk, the technology leaders include Mustafa Suleyman, Google’s DeepMind co-founder.

    A UN group focusing on autonomous weaponry was scheduled to meet on Monday but the meeting has been postponed until November, according to the group’s website.

    A potential ban on the development of “killer robot” technology has previously been discussed by UN committees.

    In 2015, more than 1,000 tech experts, scientists and researchers wrote a letter warning about the dangers of autonomous weaponry.

    Among the signatories of the 2015 letter were scientist Stephen Hawking, Apple co-founder Steve Wozniak and Mr Musk.

    What is a ‘killer robot’?

    Killer robots are fully autonomous weapons that can select and engage targets without human intervention. They do not currently exist, but advances in technology are bringing them closer to reality.

    Those in favour of killer robots believe the current laws of war may be sufficient to address any problems that might emerge if they are ever deployed, arguing that a moratorium, not an outright ban, should be called if this is not the case.

    However, those who oppose their use believe they are a threat to humanity and any autonomous “kill functions” should be banned.



Microsoft gambles on a quantum leap in computing

In a laboratory in Copenhagen, scientists believe they are on the verge of a breakthrough that could transform computing.

A team combining Microsoft researchers and Niels Bohr Institute academics is confident that it has found the key to creating a quantum computer.

If they are right, then Microsoft will leap to the front of a race that has a tremendous prize – the power to solve problems that are beyond conventional computers.

In the lab is a series of white cylinders – fridges cooled to near absolute zero as part of the process of creating a qubit, the building block of a quantum computer.

“This is colder than deep space, it may be the coldest place in the universe,” Prof Charlie Marcus tells me.

The team he leads is working in collaboration with other labs in the Netherlands, Australia and the United States in Microsoft’s quantum research programme.

Right now, they are behind in the race – the likes of Google, IBM and a Silicon Valley start-up called Rigetti have already shown they can build systems with as many as 50 qubits. Microsoft has yet to demonstrate – in public at least – that it can build one.

But these scientists are going down a different route from their rivals, trying to create qubits using a subatomic particle whose existence was first suggested back in the 1930s by the Italian physicist Ettore Majorana.

This week scientists from Microsoft’s laboratory in Delft published a paper in the journal Nature outlining the progress they had made in isolating the Majorana particle.

Their belief is that this will lead to a much more stable qubit than the methods their rivals are using, which are highly prone to errors. That should mean scaling up to a fully operational quantum computer will be far easier.

At the Copenhagen lab they showed me, through a powerful microscope, the tiny wire where they have created these Majorana particles. Later, over dinner, Prof Marcus tried, not altogether successfully, to use three pieces of bread and some cutlery to demonstrate what is unique about this approach to someone whose last physics exam was more than 40 years ago.

“What’s really astounding with this activity compared with what everybody else is doing is that we have to invent a particle that’s never existed before and then use it for computing,” he explains.

“It’s a profoundly more exotic challenge than what’s going on with other approaches to quantum computing.”

Other scientists taking those other approaches are looking on with great interest and a little scepticism.

“It’s one of those things that on paper look incredibly exciting but physics has a habit of throwing up spanners in the works,” says University College London’s Prof John Morton, whose research involves using good old-fashioned silicon to build qubits.

“Until we see the demonstration we don’t know how well these Majorana qubits developed by Microsoft will really behave.”

He says this is a big year for the field, with the strong likelihood that Google or IBM will demonstrate what is known as quantum supremacy, where a problem that is beyond a conventional supercomputer is solved using quantum methods.

But Microsoft seems confident that its years of research will soon pay off.

“We will have a commercially relevant quantum computer – one that’s solving real problems – within five years,” says Dr Julie Love, Microsoft’s director of quantum computing business development.

She is already out selling the company’s customers a vision of a near future where quantum computers will help battle climate change, create new superconducting materials and super-charge machine learning.

“What it allows us to do is solve problems that with all of our supercomputers running in parallel would take the lifetime of the universe to solve in seconds, hours or days.”

So, the heat is on for the research team. Prof Charlie Marcus, who spent most of his career at Harvard before being recruited to run the Copenhagen lab, says his life has been about creating knowledge, not building products.

“My job is to find out what works and hand it off to the engineers and computer scientists who will turn it into a technology.”

Heading up the whole programme is Todd Holmdahl, the Microsoft executive previously in charge of the HoloLens mixed reality headset and the Xbox games console – a measure of how serious the company is about making some quantum hardware pretty soon.

I pressed Prof Marcus on whether his team was going to hit that ambitious five-year target set by his employer.

“We’re sure going to try,” he says with a grin.

South Korean university boycotted over ‘killer robots’

Leading AI experts have boycotted a South Korean university over a partnership with weapons manufacturer Hanwha Systems.

More than 50 AI researchers from 30 countries signed a letter expressing concern about its plans to develop artificial intelligence for weapons.

In response, the university said it would not be developing “autonomous lethal weapons”.

The boycott comes ahead of a UN meeting to discuss killer robots.

Shin Sung-chul, president of the Korea Advanced Institute of Science and Technology (Kaist), said: “I reaffirm once again that Kaist will not conduct any research activities counter to human dignity including autonomous weapons lacking meaningful human control.

“Kaist is significantly aware of ethical concerns in the application of all technologies including artificial intelligence.”

He went on to explain that the university’s project was centred on developing algorithms for “efficient logistical systems, unmanned navigation and aviation training systems”.

  • Terrorists ‘certain’ to get killer robots, says defence giant
  • Killer robots: Experts warn of ‘third revolution in warfare’

    Prof Noel Sharkey, who heads the Campaign to Stop Killer Robots, was one of the first to sign the letter and welcomed the university’s response.

    “We received a letter from the president of Kaist making it clear that they would not help in the development of autonomous weapons systems.

    “The signatories of the letter will need a little time to discuss the relationship between Kaist and Hanwha before lifting the boycott,” he added.

    Until the boycott is lifted, academics will refuse to collaborate with any part of Kaist.

    Pandora’s box

    Next week in Geneva, 123 member nations of the UN will discuss the challenges posed by lethal autonomous weapons, or killer robots, with 22 of these nations calling for an outright ban on such weapons.

    “At a time when the United Nations is discussing how to contain the threat posed to international security by autonomous weapons, it is regrettable that a prestigious institution like Kaist looks to accelerate the arms race to develop such weapons,” read the letter sent to Kaist, announcing the boycott.

    “If developed, autonomous weapons will be the third revolution in warfare. They will permit war to be fought faster and at a scale greater than ever before. They have the potential to be weapons of terror.

    “Despots and terrorists could use them against innocent populations, removing any ethical restraints. This Pandora’s box will be hard to close if it is opened.”

    South Korea already has an army of robots patrolling the border with North Korea. The Samsung SGR-A1 carries a machine gun that can be switched to autonomous mode but is, at present, operated by humans via camera links.

Zuckerberg: I’m still the man to run Facebook

Despite the turmoil that continues to surround his company, Mark Zuckerberg has insisted he is still the best person to lead Facebook.

“When you’re building something like Facebook which is unprecedented in the world,” he said on Wednesday, “there are things that you’re going to mess up.

“What I think people should hold us accountable for is if we are learning from our mistakes.”

As well as being Facebook’s chief executive, Mr Zuckerberg is chairman of the company’s board. When asked if his position had been discussed, he replied: “Not that I know of!”

The mere possibility that his leadership is in question is a scenario few would have predicted even a month ago.

But recent reports around improper data gathering by third parties – as well as fake news and propaganda – have prompted some to question Mr Zuckerberg’s ability to lead a company that some think has grown beyond his control.

  • Facebook scandal ‘hit 87 million users’
  • Zuckerberg to testify before US committee
  • Facebook chief fires back at Apple’s Tim Cook
  • Facebook haunted by ‘ugly truth’ memo

    ‘By design, he can’t be fired – he can only resign’

    Scott Stringer, head of New York City’s pension fund, said this week that Mr Zuckerberg should step aside. The fund holds approximately $1bn-worth of shares in the social network.

    “They have two billion users,” Mr Stringer told CNBC.

    “They are in uncharted waters, and they have not comported themselves in a way that makes people feel good about Facebook and secure about their own data.”

    A piece in technology magazine Wired called for Mr Zuckerberg to step down in order to let Facebook start a “reputation-enhancing second chapter”.

    “He doesn’t just lead an institution that touches almost every person on the planet,” wrote Felix Salmon.

    “He also, thanks to financial engineering, has a majority of shareholder votes and controls the board, and is therefore answerable to no one.

    “By design, he can’t be fired – he can only resign. Which is exactly what he should now do.”

    ‘A man often criticised as lacking empathy’

    Mr Zuckerberg’s conference call went as well as the 33-year-old could have expected.

    Indeed, at one point he asked for more time so that he could take more questions.

    From his answers we learned a little more about the real toll of the negative publicity and the “deleteFacebook” movement. And so far the answer is: not much.

    There has been “no meaningful impact that we’ve observed” he said, before quickly adding: “But look, it’s not good!”

    What we couldn’t tell during the call, of course, was to what extent Mr Zuckerberg was being quietly guided by his team in the room.

    But for a man often criticised as lacking empathy, it was a strong display lasting almost an hour. Investors certainly thought so – shares were up 3% once the call ended.

    Next week he will face a potentially tougher prospect, this time in front of the cameras, when he heads to Washington to testify before Congress.


      Indeed, this session with the press was perhaps the ideal dress rehearsal.

      The dynamic around Mr Zuckerberg’s leadership could change dramatically in the coming months, as investigations – most notably from the Federal Trade Commission (FTC) – begin to probe deeper into how Facebook handled the public’s data.

      If the company is seen to have fallen short of its responsibility, and is hit with a potentially enormous fine, it could increase pressure on Facebook to make serious personnel changes.

      So far, despite all of the apologies and admissions of poor judgement, Mr Zuckerberg told reporters that not a single person at the company had been fired over the Cambridge Analytica fiasco.

      The buck stops with him, he said – and indeed it might.


Facebook scandal ‘hit 87 million users’

Facebook believes the data of up to 87 million people was improperly shared with the political consultancy Cambridge Analytica – many more than previously disclosed.

The BBC has been told that about 1.1 million of them are UK-based.

The overall figure had previously been put at 50 million by the whistleblower Christopher Wylie.

Facebook chief Mark Zuckerberg said “clearly we should have done more, and we will going forward”.


    During a press conference he said that he had previously assumed that if Facebook gave people tools, it was largely their responsibility to decide how to use them.

    The latest revelations came several hours after the US House Commerce Committee announced that Facebook’s founder, Mark Zuckerberg, would testify before it on 11 April.

    Facebook’s share price has dropped sharply in the weeks since the allegations emerged.

    Wide-ranging changes

    In a blog post on Wednesday, Facebook’s chief technology officer, Mike Schroepfer, detailed new steps being taken by the company in the wake of the scandal.

    They include:

    • a decision to stop third-party apps seeing who is on the guest lists of Events pages and the contents of messages posted on them
    • a commitment to only hold call and text history logs collected by the Android versions of Messenger and Facebook Lite for a year. In addition, Facebook said the logs would no longer include the time of the calls
    • a link will appear at the top of users’ News Feeds next week, prompting them to review the third-party apps they use on Facebook and what information is shared as a consequence

      Facebook has also published proposed new versions of its terms of service and data use policy.

      The documents are longer than the existing editions in order to make the language clearer and more descriptive.

      Tinder users affected

      Another change the company announced involved limiting the type of information that can be accessed by third-party applications.

      Immediately after the changes were announced, however, users of the popular dating app Tinder were hit by login errors, leaving them unable to use the service.

      A technical issue is preventing users from logging into Tinder. We apologize for the inconvenience and are working to have everyone swiping again soon.

      — Tinder (@Tinder) April 4, 2018

      Tinder relies on Facebook to manage its logins. Users reported that they had been signed out of the app and were unable to log in again.

      Instead, the app repeatedly asks for more permissions to access a user’s Facebook profile information. Many were quick to link the outage to the changes announced by Facebook.

      Y'all I just checked on my account and this is real. Facebook just broke Tinder. This is about to be America's loneliest Wednesday night in several years

      — Casey Newton (@CaseyNewton) April 4, 2018

      Fake news

      The Cambridge Analytica scandal follows earlier controversies about “fake news” and evidence that Russia tried to influence US voters via Facebook.

      Mr Zuckerberg has declined to answer questions from British MPs.

      When asked about this by the BBC, he said he had decided that his chief technology officer and chief product officer should answer questions from countries other than the US.

      He added, however, that he had made a mistake in 2016 by dismissing the notion that fake news had influenced the US Presidential election.

      “People will analyse the actual impact of this for a long time to come,” he added.

      “But what I think is clear at this point is that it was too flippant and I should never have referred to it as crazy.”