‘More than 600 apps had access to my iPhone data’

While Facebook desperately tightens controls over how third parties access its users’ data – trying to mend its damaged reputation – attention is focusing on the wider issue of data harvesting and the threat it poses to our personal privacy.

Data harvesting is a multibillion dollar industry and the sobering truth is that you may never know just how much data companies hold about you, or how to delete it.

That’s the startling conclusion drawn by some privacy campaigners and technology companies.

“Thousands of companies are in the business of harvesting your data and tracking your online behaviour,” says Frederike Kaltheuner, data programme lead for lobby group Privacy International.

“It’s a global business. And not just online, but offline, too, via loyalty cards and wi-fi tracking of your mobile. It’s almost impossible to know what’s happening to your data.”

The really big data brokers – firms such as Acxiom, Experian, Quantium, CoreLogic, eBureau, ID Analytics – can hold as many as 3,000 data points on every consumer, says the US Federal Trade Commission.

Ms Kaltheuner says more than 600 apps have had access to her iPhone data over the last six years. So she’s taken on the onerous task of finding out exactly what these apps know about her.

“It could take a year,” she says, because it involves poring over every privacy policy then contacting the app provider to ask them. And not taking “no” for an answer.

Not only is it difficult to know what data is out there, it is also difficult to know how accurate it is.

“They got my income totally wrong, they got my marital status wrong,” says Pamela Dixon, executive director of the World Privacy Forum, another privacy rights lobby group.

She was examining her record with one of the merchants that scoop up and sell data on individuals around the globe.

She found herself listed as a computer enthusiast – “which is a bit annoying, I’m not running around buying computers every day” – and as a runner, though she’s a cyclist.

Susan Bidel, senior analyst at Forrester Research in New York, who covers data brokers, says a common belief in the industry is that only “50% of this data is accurate”.

So why does any of this matter?

Because this “ridiculous marketing data”, as Ms Dixon calls it, is now determining life chances.

Consumer data – our likes, dislikes, buying behaviour, income level, leisure pursuits, personalities and so on – certainly helps brands target their advertising dollars more effectively.

But its main use “is to reduce risk of one kind or another, not to target ads,” believes John Deighton, a professor at Harvard Business School who writes on the industry.

We’re all given credit scores these days.

If the information flatters you, your credit cards and mortgages will be much cheaper, and you will pass employment background checks more easily, says Prof Deighton.

But these scores may not only be inaccurate, they may be discriminatory, hiding information about race, marital status, and religion, says Ms Dixon.

“An individual may never realize that he or she did not receive an interview, job, discount, premium, coupon, or opportunity due to a low score,” the World Privacy Forum concludes in a report.

Collecting consumer data has been going on for as long as companies have been trying to sell us stuff.

As far back as 1841, Dun & Bradstreet collected credit information and gossip on possible credit-seekers. In the 1970s, list brokers offered magnetic tapes containing data on a bewildering array of groups: holders of fishing licences, magazine subscribers, or people likely to inherit wealth.

But nowadays, the sheer scale of online data has swamped the traditional offline census and voter registration data.

Much of this data is aggregated and anonymised, but much of it isn’t. And many of us have little or no idea how much data we’re sharing, often because we agree to online terms and conditions without reading them. Perhaps understandably.

Two researchers at Carnegie Mellon University in the US worked out that if you were to read every privacy policy you came across online, it would take you 76 days, reading eight hours a day.
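The arithmetic behind that figure is easy to check. The article does not give the study's inputs, so the numbers below – roughly 1,460 policies encountered a year, at about 25 minutes each – are illustrative assumptions that happen to reproduce the 76-day estimate:

```python
# Back-of-the-envelope check of the "76 days" claim.
# Both inputs are assumptions for illustration; the article does not state
# the figures the Carnegie Mellon researchers actually used.
POLICIES_PER_YEAR = 1460   # assumed: roughly four new policies a day
MINUTES_PER_POLICY = 25    # assumed: one careful read of a typical policy

total_hours = POLICIES_PER_YEAR * MINUTES_PER_POLICY / 60
working_days = total_hours / 8  # eight hours of reading per day

print(round(working_days))  # about 76
```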

And anyway, having to do this “shouldn’t be a citizen’s job”, argues Ms Kaltheuner. “Companies should have to protect our data as a default.”

Rashmi Knowles from security firm RSA points out that it’s not just data harvesters and advertisers who are in the market for our data.

“Often hackers can answer your security questions – things like date of birth, mother’s maiden name, and so on – because you have shared this information in the public domain,” she says.

    “You would be amazed how easy it is to piece together a fairly accurate profile from just a few snippets of information, and this information can be used for identity theft.”

    So how can we take control of our data?  

    There are ways we can restrict the amount of data we share with third parties – changing browser settings to block cookies, for example, using ad-blocking software, browsing “incognito” or using virtual private networks.
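What "blocking cookies" actually does can be sketched with Python's standard library alone; the browser settings mentioned above implement the same idea, and the domain name below is made up:

```python
# A minimal sketch of cookie blocking using only Python's standard library.
# The "ad-tracker.example" domain is illustrative, not a real tracker.
import http.cookiejar
import urllib.request

# A policy with an empty allow-list refuses cookies from every site.
block_all = http.cookiejar.DefaultCookiePolicy(allowed_domains=[])
jar = http.cookiejar.CookieJar(policy=block_all)

# Any opener built with this jar discards Set-Cookie headers, so to a
# tracker each request looks like a first-time visit.
opener = urllib.request.build_opener(
    urllib.request.HTTPCookieProcessor(jar)
)

print(block_all.is_not_allowed("ad-tracker.example"))  # True: cookie refused
```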

      And search engines like DuckDuckGo limit the amount of information they reveal to online tracking systems.

      But StJohn Deakins, founder and chief executive of marketing firm CitizenMe, believes consumers should be given the ability to control and monetise their data.

      On his app, consumers take personality tests and quizzes voluntarily, then share that data anonymously with brands looking to buy more accurate marketing data to inform their advertising campaigns.

      “Your data is much more compelling and valuable if it comes from you willingly in real time. You can outcompete the data brokers,” he says.

      “Some of our 80,000 users around the world are making £8 a month or donating any money earned to charities,” says Mr Deakins.

      Brands – from German car makers to big retailers – are looking to source data “in an ethical way”, he says.

      “We need to make the marketplace for data much more transparent.”

The man with (almost) no data trail

No Facebook account. No Twitter. No Instagram.

No smartphone. No tablet. No online banking.

Just an email account accessed at the local library and a chunky Nokia 3210 with a built-in torch.

Felix, not his real name (as you might expect…), lives without the tools and social media accounts that are woven into most of our lives.

For many of us it’s a love-hate relationship – enjoying the regular social contact with friends and family and the efficiencies but lamenting the banality of much of it and the hours it sucks up.

And most recently there’s the matter of the digital trail we leave behind us, the breadcrumbs that social media companies gather up and sell, as we lose track of who knows what about our movements, our needs, our impulses and behaviours.

Felix, a 33-year-old gardener, has been swimming against the tide for years.

In 2018, it may sound like a staunch political statement to veer away from technology and the internet, he says, but, in truth, he just never fancied it.

As new technology emerged and became mainstream, Felix wasn’t drawn to it.

“They weren’t useful to me. I got along without them, like playing the trombone,” he says.

But the world spun several times and a couple of decades later Felix now finds himself something of an anomaly.

People treat him with a sort of low-key admiration and slight bemusement, he says, and when new people see his phone for the first time “they crack up”.

“Most people think if you can live your life that way, good on you. But most wouldn’t want to live like this,” he says.

And don’t write off Felix as someone with little knowledge of the modern world – he is aware of today’s technology used by others his age.

“I would never say you should throw your Alexa in the bin,” he says.

“But it is easier to have a natural human engagement with the world and other people without layers of technology interfering with that.”

    He does use the internet at the library, going online about twice a week for an hour at a time.

    Typically he’ll work through a list of admin tasks, searching for numbers, addresses, or finding out about a new band – music is his passion. Rarely will he venture onto Facebook or Twitter.

    “When Facebook came out I was interested that it was becoming so popular. So I had a look at a friend’s profile to see the shape of it – that was enough for me. I didn’t touch it for 10 years,” he says.

    Now he might look up a public event advertised on it or scroll through Twitter, albeit without feeling the need to create his own account.

    Asked what he does with all the time he saves by avoiding social media, he laughs and calls it a funny question.

    “Social media is not a fundamental human need. I’m just not sure people were wandering round in 1995 thinking it’s a crying shame I don’t know what Kim Kardashian had for lunch.”

    There are no computers at his family home in Kent, where he lives with his parents, and no tablets or Netflix, just a Freeview box and a TV.

    “I don’t have a hunger for the new thing.

“If I don’t see Game of Thrones, I assume it will be around for a long time. It defuses the immediate requirement to gobble things up,” he says.

    But he is on the electoral register and his home number is in the phone book. This type of public data bothers Felix less.

    “It’s older, more firmly established. It was not driven by a thirst for monetising thoughts and personality. It was more expressly an exchange,” he says.

    He also carries an Oyster travelcard, meaning authorities could track his movements around the capital.

    “I don’t love the fact that someone could find out where I’m travelling to in London, but my thinking isn’t fuelled by paranoia.”

    Being traceable is an unpleasant symptom of life today, he says, and if he can avoid his life being publicly available, he will.

Felix says he expects to see a few more people leave Facebook and chop up their credit cards as a sort of political statement, but believes the uptake is so complete, and the digital footprint so deep, that most of us are welded to it – it’s how we process our world.

    At times, this interview has been uncomfortable for Felix, who asked us not to share his real name.

    “I am quite far away from having a digital identity,” he says. “It’s quite a big step for me to have something documented for me online.”

    But he says he can understand the interest in his own non-digital stance, given the recent attention around Facebook and what it does with our data.

News that the social network’s users’ data had been harvested and sold without explicit permission did not come as a surprise to Felix, but he does find it distasteful that Facebook packages itself with a friendly face, as though it were more than just a business.

    Does he feel relieved that his own data was never at risk?

    “There is a small part of me that thinks it’s nice I don’t have to have that on my mind,” he replies.

    Felix says he has no intention of changing tack, despite everyone around him, including his older brother, having “normal” attitudes towards technology.

    But, surely, there must be something he feels he is missing out on? Breaking news updates, social gossip, looking through pictures of events he has attended?

    Finally, he cracks… but just a little.

    “I’m indifferent to pictures of that gig, that dinner we went to, but I do have a pang about the wedding pictures I might miss seeing. I don’t get to see them because no hard copies exist of them.”

But is that enough incentive for Felix to get a Facebook account, to surrender to a world of likes and random friend requests? Still, his answer remains an emphatic “no”.

Facebook ‘ugly truth’ growth memo haunts firm

A Facebook executive’s memo that claimed the “ugly truth” was that anything it did to grow was justified has been made public, embarrassing the company.

The 2016 post said that this applied even if it meant people might die as a result of bullying or terrorism.

Both the author and the company’s chief executive, Mark Zuckerberg, have denied they actually believe the sentiment.

But it risks overshadowing the firm’s efforts to tackle an earlier scandal.

Facebook has been under intense scrutiny since it acknowledged that it had received reports that a political consultancy – Cambridge Analytica – had not destroyed data harvested from about 50 million of its users years earlier.

The memo was first made public by the Buzzfeed news site, and was written by Andrew Bosworth.

The 18 June 2016 memo:

So we connect more people.

That can be bad if they make it negative. Maybe it costs a life by exposing someone to bullies. Maybe someone dies in a terrorist attack co-ordinated on our tools.

And still we connect people.

The ugly truth is that we believe in connecting people so deeply that anything that allows us to connect more people more often is *de facto* good. It is perhaps the only area where the metrics do tell the true story as far as we are concerned.


That’s why all the work we do in growth is justified. All the questionable contact importing practices. All the subtle language that helps people stay searchable by friends. All of the work we do to bring more communication in. The work we will likely have to do in China some day. All of it.

Mr Bosworth – who co-invented Facebook’s News Feed – has held high-level posts at the social network since 2006, and is currently in charge of its virtual reality efforts.

Mr Bosworth has since tweeted that he “didn’t agree” with the post even at the time he wrote it, but had shared it with staff to be “provocative”.

“Having a debate around hard topics like these is a critical part of our process and to do that effectively we have to be able to consider even bad ideas,” he added.

    Mark Zuckerberg has issued his own statement.

    “Boz is a talented leader who says many provocative things,” it said.

    “This was one that most people at Facebook including myself disagreed with strongly. We’ve never believed the ends justify the means.”

A follow-up report by the Verge revealed that dozens of Facebook employees had subsequently used its internal chat tools to discuss concerns that such material had been leaked to the media.


    By Rory Cellan-Jones, Technology correspondent

    What immediately struck me about this leaked memo was the line about “all the questionable contact importing practices”.

    When I downloaded my Facebook data recently, it was the presence of thousands of my phone contacts that startled me. But the company’s attitude seemed to be that this was normal and it was up to users to switch off the function if they didn’t like it.

    What we now know is that in 2016 a very senior executive thought this kind of data gathering was questionable.

    So, why is it only now that the company is having a debate about this and other dubious practices?

    Until now, Facebook has not been leaky. Perhaps we will soon get more insights from insiders as this adolescent business tries to grow up and come to terms with its true nature.

    Fact checking

    The disclosure coincided with Facebook’s latest efforts to address the public and investors’ concerns with its management.

    Its shares are trading about 14% lower than they were before the Cambridge Analytica scandal began, and several high profile figures have advocated deleting Facebook accounts.

    The company hosted a press conference on Thursday, at which it said it had:

    • begun fact-checking photos and videos posted in France, and would expand this to other countries soon
    • developed a new fake account investigative tool to prevent harmful election-related activities
• started work on a public archive that will make it possible for journalists and others to investigate political ads posted to its platform

      In previous days it had also announced a revamp of its privacy settings, and said it would restrict the amount of data exchanged with businesses that collect information on behalf of advertisers.

      The latest controversy is likely, however, to provide added ammunition for critics.

CNN reported earlier this week that Mr Zuckerberg had decided to testify before Congress “within a matter of weeks” after refusing a request to do so before UK MPs. However, the BBC has been unable to independently verify whether he will answer questions in Washington.

Facebook’s Zuckerberg fires back at Apple’s Tim Cook

Facebook’s chief executive has defended his leadership following criticism from his counterpart at Apple.

Mark Zuckerberg said it was “extremely glib” to suggest that, because the public did not pay to use Facebook, the firm did not care about them.

Last week, Apple’s Tim Cook said it was an “invasion of privacy” to traffic in users’ personal lives.

And when asked what he would do if he were Mr Zuckerberg, Mr Cook replied: “I wouldn’t be in that situation.”

Facebook has faced intense criticism after it emerged that it had known for years that Cambridge Analytica had harvested data from about 50 million of its users, but had relied on the political consultancy to self-certify that it had deleted the information.

Channel 4 News has since reported that at least some of the data in question is still in circulation despite Cambridge Analytica insisting it had destroyed the material.

Mr Zuckerberg was asked about Mr Cook’s comments during a lengthy interview given to news site Vox about the privacy scandal.

He also acknowledged that Facebook was still not transparent enough about some of the choices it had taken, and floated the idea of an independent panel being able to override some of its decisions.

‘Dire situation’

Mr Cook has spoken in public twice since Facebook’s data-mining controversy began.

On 23 March, he took part in the China Development Forum in Beijing.

“I think that this certain situation is so dire and has become so large that probably some well-crafted regulation is necessary,” news agency Bloomberg quoted him as saying in response to a question about the social network’s problems.

“The ability of anyone to know what you’ve been browsing about for years, who your contacts are, who their contacts are, things you like and dislike and every intimate detail of your life – from my own point of view it shouldn’t exist.”

    Then in an interview with MSNBC and Recode on 28 March, Mr Cook said: “I think the best regulation is no regulation, is self-regulation. However, I think we’re beyond that here.”

    During this second appearance – which has yet to be broadcast in full – he added: “We could make a tonne of money if we monetised our customer, if our customer was our product. We’ve elected not to do that… Privacy to us is a human right.”

    Apple makes most of its profits from selling smartphones, tablets and other computers, as well as associated services such as online storage and its various media stores.

    This contrasts with other tech firms whose profits are largely derived from advertising, including Google, Twitter and Facebook.

    Mr Zuckerberg had previously told CNN that he was “open” to new regulations.

    But he defended his business model when questioned about Mr Cook’s views, although he mentioned neither Apple nor its leader by name.

    “I find that argument, that if you’re not paying that somehow we can’t care about you, to be extremely glib and not at all aligned with the truth,” he said.

    “The reality here is that if you want to build a service that helps connect everyone in the world, then there are a lot of people who can’t afford to pay.”

    He added: “I think it’s important that we don’t all get Stockholm syndrome and let the companies that work hard to charge you more convince you that they actually care more about you, because that sounds ridiculous to me.”

    Mr Zuckerberg also defended his leadership by invoking Amazon’s chief executive.

    “I make all of our decisions based on what’s going to matter to our community and focus much less on the advertising side of the business,” he said.

“I thought Jeff Bezos had an excellent saying: ‘There are companies that work hard to charge you more, and there are companies that work hard to charge you less.’”

    ‘Turned into a beast’

    Elsewhere in the 49-minute interview, Mr Zuckerberg said he hoped to make Facebook more “democratic” by giving members a chance to challenge decisions its own review team had taken about what content to permit or ban.

    Eventually, he said, he wanted something like the “Supreme Court”, in which people who did not work for the company made the ultimate call on what was acceptable speech.

    Mr Zuckerberg also responded to recent criticism from a UN probe into allegations of genocide against the Rohingya Muslims in Myanmar.

    Last month, one of the human rights investigators said Facebook had “turned into a beast” and had “played a determining role” in stirring up hatred against the group.

    Mr Zuckerberg claimed messages had been sent “to each side of the conflict” via Facebook Messenger, attempting to make them go to the same locations to fight.

    But he added that the firm had now set up systems to detect such activity.

    “We stop those messages from going through,” he added.

    “But this is certainly something that we’re paying a lot of attention to.”

Facebook’s Zuckerberg to testify before US committee

Facebook’s chief executive Mark Zuckerberg is to testify before the US House Commerce Committee regarding the firm’s use and protection of user data.

Facebook has faced criticism after it emerged it had known for years that Cambridge Analytica had harvested data from about 50 million of its users.

Mr Zuckerberg will testify before the committee on Wednesday, 11 April.

Committee chairman Greg Walden and member Frank Pallone welcomed the decision by Mr Zuckerberg.

“This hearing will be an important opportunity to shed light on critical consumer data privacy issues and help all Americans better understand what happens to their personal information online,” the pair said.

Facebook is facing scrutiny over its data collection following allegations that Cambridge Analytica, a political consulting firm, obtained data on tens of millions of Facebook users to try to influence elections.

Cambridge Analytica worked for US President Donald Trump’s campaign.

The company, funded in part by Trump supporter and billionaire financier Robert Mercer, paired consumer data with voter information.

    Cambridge Analytica gathered the data through a personality test app, called This Is Your Digital Life, that was downloaded by fewer than 200,000 people.

    However, the app gave researchers access to the profiles of participants’ Facebook friends, allowing them to collect data from millions more users.
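That fan-out effect is easy to illustrate: each installer exposes not just their own profile but their friends’ profiles too, so the exposed set grows far faster than the install count. The names and numbers below are invented for illustration:

```python
# Illustrative sketch of the friends-list fan-out: a handful of app
# installs exposes a much larger set of profiles. All data is made up.
friends = {
    "alice": {"dan", "erin", "frank"},
    "bob":   {"erin", "grace", "heidi"},
    "carol": {"frank", "heidi", "ivan"},
}

# The exposed set is the installers plus the union of their friend lists
# (duplicate friends are only counted once).
exposed = set(friends) | set().union(*friends.values())

print(len(friends), "installs exposed", len(exposed), "profiles")
# → 3 installs exposed 9 profiles
```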

    Mr Walden and Mr Pallone said last month that they wanted to hear directly from Mr Zuckerberg after senior Facebook executives failed to answer questions during a private briefing with congressional staff about how Facebook and third-party developers use and protect consumer data.

    Facebook has also published new versions of its terms of service and data use policy.

    The firm said the documents were longer than the previous versions in order to make their language clearer and more descriptive.

    The data policy now states: “We don’t sell any of your information to anyone, and we never will.”

However, this does not prevent the firm from using the data to let advertisers target their promotions. It will also continue to share anonymised analytics and insights with third parties.

    Facebook will now carry out a week-long consultation before finalising the text and adopting it.

    ‘Breach of trust’

    Facebook, which has two billion users, is now one of the main ways politicians connect with voters. It has been looking to repair its public image and restore users’ trust since the Cambridge Analytica scandal emerged.

    Facebook said last month that it had hired forensic auditors to examine if Cambridge Analytica still had the data.

    Mr Zuckerberg has apologised for a “breach of trust”, and taken out full-page advertisements in several UK and US Sunday newspapers.

    He has also said he welcomes more regulation.

    The US Senate commerce and judiciary committees also have requested that Mr Zuckerberg appear in front of them.

    And the US Federal Trade Commission is investigating whether Facebook engaged in unfair acts that caused substantial injury to consumers.

Zuckerberg: I’m still the man to run Facebook

Despite the turmoil that continues to surround his company, Mark Zuckerberg has insisted he is still the best person to lead Facebook.

“When you’re building something like Facebook which is unprecedented in the world,” he said on Wednesday, “there are things that you’re going to mess up.

“What I think people should hold us accountable for is if we are learning from our mistakes.”

As well as being Facebook’s chief executive, Mr Zuckerberg is chairman of the company’s board. When asked if his position had been discussed, he replied: “Not that I know of!”

The mere possibility that his leadership is in question is a scenario few would have predicted even a month ago.

But recent reports around improper data gathering by third parties – as well as fake news and propaganda – have prompted some to question Mr Zuckerberg’s ability to lead a company that some think has grown beyond his control.

    ‘By design, he can’t be fired – he can only resign’

Scott Stringer, head of New York City’s pension fund, said this week that Mr Zuckerberg should step aside. The fund owns approximately $1bn of shares in the social network.

    “They have two billion users,” Mr Stringer told CNBC.

    “They are in uncharted waters, and they have not comported themselves in a way that makes people feel good about Facebook and secure about their own data.”

    A piece in technology magazine Wired called for Mr Zuckerberg to step down in order to let Facebook start a “reputation-enhancing second chapter”.

    “He doesn’t just lead an institution that touches almost every person on the planet,” wrote Felix Salmon.

    “He also, thanks to financial engineering, has a majority of shareholder votes and controls the board, and is therefore answerable to no one.

    “By design, he can’t be fired – he can only resign. Which is exactly what he should now do.”

    ‘A man often criticised as lacking empathy’

    Mr Zuckerberg’s conference call went as well as the 33-year-old could have expected.

Indeed, at one point he asked to extend the call so that he could take more questions.

    From his answers we learned a little more about the real toll of the negative publicity and the “deleteFacebook” movement. And so far the answer is: not much.

    There has been “no meaningful impact that we’ve observed” he said, before quickly adding: “But look, it’s not good!”

    What we couldn’t tell during the call, of course, was to what extent Mr Zuckerberg was being quietly guided by his team in the room.

    But for a man often criticised as lacking empathy, it was a strong display lasting almost an hour. Investors certainly thought so – shares were up 3% once the call ended.

    Next week he will face a potentially tougher prospect, this time in front of the cameras, when he heads to Washington to testify before Congress.

      Indeed, this session with the press was perhaps the ideal dress rehearsal.

      The dynamic around Mr Zuckerberg’s leadership could change dramatically in the coming months, as investigations – most notably from the Federal Trade Commission (FTC) – begin to probe deeper into how Facebook handled the public’s data.

      If the company is seen to have fallen short of its responsibility, and is hit with a potentially enormous fine, it could increase pressure on Facebook to make serious personnel changes.

      So far, despite all of the apologies and admissions of poor judgement, Mr Zuckerberg told reporters that not a single person at the company had been fired over the Cambridge Analytica fiasco.

      The buck stops with him, he said – and indeed it might.

Facebook scandal ‘hit 87 million users’

Facebook believes the data of up to 87 million people was improperly shared with the political consultancy Cambridge Analytica – many more than previously disclosed.

The BBC has been told that about 1.1 million of them are UK-based.

The overall figure had been previously quoted as being 50 million by the whistleblower Christopher Wylie.

Facebook chief Mark Zuckerberg said “clearly we should have done more, and we will going forward”.

    During a press conference he said that he had previously assumed that if Facebook gave people tools, it was largely their responsibility to decide how to use them.

    The latest revelations came several hours after the US House Commerce Committee announced that Facebook’s founder, Mark Zuckerberg, would testify before it on 11 April.

    Facebook’s share price has dropped sharply in the weeks since the allegations emerged.

    Wide-ranging changes

In a blog post on Wednesday, Facebook’s chief technology officer, Mike Schroepfer, detailed new steps being taken by the company in the wake of the scandal.

    They include:

    • a decision to stop third-party apps seeing who is on the guest lists of Events pages and the contents of messages posted on them
    • a commitment to only hold call and text history logs collected by the Android versions of Messenger and Facebook Lite for a year. In addition, Facebook said the logs would no longer include the time of the calls
    • a link will appear at the top of users’ News Feeds next week, prompting them to review the third-party apps they use on Facebook and what information is shared as a consequence

      Facebook has also published proposed new versions of its terms of service and data use policy.

      The documents are longer than the existing editions in order to make the language clearer and more descriptive.

      Tinder users affected

      Another change the company announced involved limiting the type of information that can be accessed by third-party applications.

Immediately after the changes were announced, however, users of the popular dating app Tinder were hit by login errors, leaving them unable to use the service.

“A technical issue is preventing users from logging into Tinder. We apologize for the inconvenience and are working to have everyone swiping again soon.” — Tinder (@Tinder), April 4, 2018

Tinder relies on Facebook to manage its logins. Users reported that they had been signed out of the app and were unable to log in again.

Instead, the app repeatedly asks for more permissions to access a user’s Facebook profile information. Many were quick to link the outage to the changes announced by Facebook.

“Y'all I just checked on my account and this is real. Facebook just broke Tinder. This is about to be America's loneliest Wednesday night in several years https://t.co/5KHe763wGY” — Casey Newton (@CaseyNewton), April 4, 2018

Fake news

The Cambridge Analytica scandal follows earlier controversies about “fake news” and evidence that Russia tried to influence US voters via Facebook.

Mr Zuckerberg has declined to answer questions from British MPs.

When asked about this by the BBC, he said he had decided that his chief technology officer and chief product officer should answer questions from countries other than the US.

He added, however, that he had made a mistake in 2016 by dismissing the notion that fake news had influenced the US presidential election.

“People will analyse the actual impact of this for a long time to come,” he added.

“But what I think is clear at this point is that it was too flippant and I should never have referred to it as crazy.”

Scammers abused Facebook phone number search

Facebook was warned by security researchers that attackers could abuse its phone number and email search facility to harvest people’s data.

On Wednesday, the firm said “malicious actors” had been harvesting profiles for years by abusing the search tool.

It said anyone who had not changed their privacy settings after adding their phone number should assume their information had been harvested.

One security expert told the BBC the attack had been possible “for years”.

How did the attack work?

Until Wednesday, Facebook let people search for their friends’ profiles by typing in a phone number or email address.

The company has now disabled the ability to search by phone number.

Outlaw or ignore? How Asia is fighting ‘fake news’

Everybody is talking about it: fake news.

President Trump decries it every time he sees a critical article, the Pope has condemned it, and governments, fretting about its influence, are holding parliamentary hearings.

And now Malaysia has passed a law criminalising it, with a penalty of up to six years in jail. Yet no-one has defined what it is.

The term first came to prominence during the 2016 US presidential election campaign. But the problem of deliberately falsified news articles, masquerading as properly researched journalism, goes back centuries.

However, the Malaysian government’s definition in the recently passed law is far more sweeping than that.

It has criminalised the dissemination of “any news, information, data and reports, which is or are wholly or partly false, whether in the form of features, visuals or audio recordings or in any other form capable of suggesting words or ideas”.

Human rights groups have been quick to point out that this could be used against anyone who makes an error in their reporting or social media posts.

Moreover, at least one member of the government has already stated that, when it comes to articles critical of Prime Minister Najib Razak, any information not verified as true by the government will be viewed as fake news. This applies especially to coverage of the notorious 1MDB scandal, in which billions of dollars from a government-run investment fund are alleged to have been misappropriated.

The fact that this law has been rushed through right before what is likely to be a hard-fought general election has raised suspicions that its real purpose is to intimidate government critics.

In any case, it is not clear that Malaysia has a serious fake news problem.

In a response to the concerns expressed about the new law, the communications and multimedia minister Salleh Said Keruak highlighted the foreign media’s failure to get the sometimes complicated string of official titles for high-ranking Malaysians right – irritating, yes, but hardly a threat to national security.

The minister’s response goes on to excoriate mainstream media outlets that have published negative pieces about Mr Najib, calling them fake news, thus rather confirming suspicions that the law is aimed at them, rather than at the manipulation of social media opinion through fraudulent Facebook accounts and automated Twitter bots.

‘Better safe than sorry’

Singapore is the other country which has raised the alarm over fake news, holding 50 hours of parliamentary hearings.

Facebook’s policy director Simon Milner was publicly dressed down by the law and home affairs minister K Shanmugam over his failure to acknowledge the full extent of data taken by the data analysis company Cambridge Analytica when he testified to the British parliament earlier this year.

Academics speaking at the Singapore hearings presented an alarming scenario of disinformation campaigns launched by foreign actors bent on attacking the island state, of cyber armies in neighbouring Malaysia and Singapore working as proxies for other countries in undermining national security.

The hearings also gave Singapore academics and officials an opportunity to snipe at the US belief in free expression, the “marketplace of ideas”, which had allowed the abuse of personal data on Facebook to take place, in contrast to Singapore’s “better safe than sorry” belief in a more tightly regulated society.

  • The (almost) complete history of ‘fake news’
  • ‘Fake news’: What’s the best way to tame the beast?
  • The city getting rich from fake news

But the actual examples of fake news which have come up during this national debate have mostly been prosaic: a hoax photo showing a collapsed roof at a housing complex, which sent officials rushing unnecessarily to the scene, and an erroneous report of a collision between two trains on the light rail transit line.

Irritating and worrying for some, for a while, but hardly likely to bring Singapore society to its knees. In any case, both Singapore and Malaysia already have plenty of laws capable of penalising false, inflammatory or defamatory comment.

A poisonous tide

In the country where social media misinformation has had the most devastating impact, by contrast, there is no clamour for a fake news law.

Myanmar too has a raft of existing harsh laws sweeping enough to stifle any reporting deemed a threat to the state or society, laws which all too often have been used to jail journalists.

But these laws have been unable to prevent a poisonous tide of hate speech on social media, which has helped ignite anti-Muslim sentiment.

  • UN: Facebook has turned into a beast in Myanmar
  • Myanmar conflict: Fake photos inflame tension
  • Rohingya crisis: Suu Kyi says ‘fake news helping terrorists’

Myanmar famously leapt from being a society largely without even old-fashioned telephone lines, to one with more than 40 million mobile phone accounts, in just three years.

Seventeen million people have Facebook accounts, and as in so much of Asia, this is how most Burmese send messages and get their news.

Most don’t bother with email accounts. This has coincided with the end of strict military censorship, and the emergence in the mainly Buddhist population of a primeval fear of the small Muslim minority.

It has been all too easy to find cartoons and doctored photographs on Facebook which depict Muslims in a sinister and derogatory way. Worse still, large numbers of posts about Muslims are completely false, with photographs purporting to show atrocities against Buddhists by Muslims which are from a completely different part of the world.

The government has done nothing to stem this tide of disinformation, at times appearing to encourage it.

For example the Facebook page of the Myanmar armed forces still has on it a gruesome photograph with a caption stating that the dismembered bodies of infants, being dragged by apparently Muslim men, are Rakhine Buddhists killed by Rohingya militants in 1942.

In fact the photograph is from the Bangladesh independence war in 1971.

When journalists, myself among them, were given photographs in September 2017 during a government-run tour of Rakhine state, supposedly showing Muslims burning down their own homes and backing the assertion by officials that this was the cause of the destruction of Rohingya villages, we were quickly able to ascertain that the perpetrators in the photos were actually displaced Hindus dressed up as Muslims.

Yet the government spokesman posted one of the photos on his Twitter feed proclaiming “It’s Truth”, although he later removed it. I was told in all seriousness in one Rakhine village that Muslims used to cut up Buddhists and cook them with their beef stew.

These kinds of stories are circulating unchallenged in Myanmar, creating a tide of fear and hate which then intimidates anyone trying to advocate a more tolerant approach into silence.

The UN Special Rapporteur to Myanmar Yanghee Lee, who has herself been subjected to vicious online abuse for her focus on human rights in Rakhine, and has now been banned from entering the country, has described Facebook there as “a beast”.

Facebook says it takes the problem of hate speech very seriously, but has yet to stop the site being used to stir up sectarian conflict.

The social media game changer

The other country where social media has had a profound impact is the Philippines, where critics of President Duterte have accused his supporters of “weaponising” Facebook and Twitter to twist public opinion and silence dissent.

Filipinos are among the heaviest users of Facebook in Asia, with more than one third of the population visiting the social media site regularly.

This has made it a potentially game-changing arena for political actors who know how best to use it, in a country which has long had a lively and competitive traditional media.

Long before the 2016 election which propelled Rodrigo Duterte, a late candidate with outsider status, to the presidency, the internet was already being exploited by public relations experts promoting products and opinions through so-called “click factories”, where thousands of low-paid workers generated clicks for specific websites, and by companies openly offering hundreds of fake Facebook or Twitter accounts to boost the online profile of clients.

After announcing his candidacy in November 2015 Rodrigo Duterte hired social media experts to craft a strategy which outflanked the usual dependence on endorsement from mainstream newspapers and television channels.

It worked brilliantly, tapping into a hitherto unarticulated yearning for change among many Filipinos.

But researchers have also detected what they believe is the use of automated bots and fake Facebook accounts to amplify the pro-Duterte message, something the president’s team has denied.

The online news site Rappler published a detailed report about this in October 2016, enraging Mr Duterte’s supporters, and, it believes, prompting the ruling in January this year by the Philippines Securities and Exchange Commission that the site is illegally owned by foreign investors, a claim first made by the president last year.

Rappler also highlighted the way Facebook’s algorithms could be “gamed” to ensure certain content dominates users’ newsfeeds.

Leaving aside the allegations of social media manipulation, what President Duterte’s supporters have succeeded in doing is using Facebook and other sites to wage a war of words against his critics, or anyone publishing unfavourable reports.

I experienced this in September 2016 after the BBC published a report on Mr Duterte’s campaign against drug dealers and users, which has resulted in thousands of police and extrajudicial killings.

  • Philippines’ Duterte admits personally killing suspects
  • The human scars of Philippines drug war
  • Why Rappler is raising Philippine press freedom fears

I received a flood of hostile messages, and a few death threats on my Facebook page, and the BBC complaints site was swamped with almost identical protests over “erroneous and biased” reporting. Maria Ressa, the CEO and founder of Rappler, was at one point getting 90 hate messages an hour.

In part this militant response has been shaped by Mr Duterte’s own depiction of his presidency, more as an existential struggle to save his country than just another administration.

Mr Duterte uses emotive and bellicose language to describe his mission, openly threatening to kill those who stand in his way, including journalists, and suggesting he may in turn be killed, by unnamed enemies.

Having successfully persuaded those who voted for him that he could be a one-man saviour for the many ailments afflicting the Philippines, he, like President Trump in the US, has also instilled in his supporters a deep mistrust of traditional mainstream news sources, portraying them as controlled by powerful vested interests set on ensuring the failure of his presidency – “presstitutes”, in their preferred term.

Nowhere has the political climate been more polarised than in Myanmar and the Philippines.

Yet hearings at the Philippines Senate concluded that a specific fake news law was unnecessary, and possibly counterproductive.

Clarissa David, a professor of mass communications at the University of the Philippines, testified to the Senate about the dangers of an information environment she described as “polluted”, with no one sure any more what is real and reliable, and what is fake.

But she warned against easy definitions of fake news. And trying to outlaw it, she argued, is not worth the inevitable cost there will be for media freedom.

Hers is an argument which was made, but lost, in Malaysia.

Facebook boss apologises in UK and US newspaper ads

Facebook boss Mark Zuckerberg has taken out full-page adverts in several UK and US Sunday newspapers to apologise for the firm’s recent data privacy scandal.

He said Facebook could have done more to stop millions of users having their data exploited by political consultancy Cambridge Analytica in 2014.

“This was a breach of trust, and I am sorry,” the back-page ads state.

It comes amid reports Facebook was warned its data protection policies were too weak back in 2011.

The full-page apology featured in broadsheets and tabloids in the UK, appearing on the back page of the Sunday Telegraph, Sunday Times, Mail on Sunday, Observer, Sunday Mirror and Sunday Express.

In the US, it was seen by readers of the New York Times, Washington Post and Wall Street Journal.

  • Facebook’s Zuckerberg speaks out over Cambridge Analytica ‘breach’
  • Facebook boss summoned over data claims

In the advert, Mr Zuckerberg said a quiz developed by a university researcher had “leaked Facebook data of millions of people in 2014”.

“I’m sorry we didn’t do more at the time. We’re now taking steps to make sure this doesn’t happen again,” the tech chief said.

It echoes comments Mr Zuckerberg made last week after reports of the leak prompted investigations in Europe and the US, and knocked billions of dollars off Facebook’s market value.

Mr Zuckerberg repeated that Facebook had already changed its rules so no such breach could happen again.

“We’re also investigating every single app that had access to large amounts of data before we fixed this. We expect there are others,” he stated.

“And when we find them, we will ban them and tell everyone affected.”

The ads contained no mention of the political consultancy accused of using the leaked data, Cambridge Analytica, which worked on US President Donald Trump’s 2016 campaign.

The British firm has denied wrongdoing.

What is the row about?

In 2014, Facebook invited users to find out their personality type via a quiz called This Is Your Digital Life, developed by Cambridge University researcher Dr Aleksandr Kogan.

About 270,000 users’ data was collected, but the app also collected some public data from users’ friends without their knowledge.

Facebook has since changed the amount of data developers can gather in this way, but a whistleblower, Christopher Wylie, says the data of about 50 million people was harvested for Cambridge Analytica before the rules on user consent were tightened up.

Mr Wylie claims the data was sold to Cambridge Analytica which then used it to psychologically profile people and deliver pro-Trump material to them during the 2016 US presidential election campaign.

Facebook has said Dr Kogan passed this information on to Cambridge Analytica without its knowledge. And Cambridge Analytica has blamed Dr Kogan for any potential breach of data rules.

But Dr Kogan has said he was told by Cambridge Analytica everything they had done was legal, and that he was being made a “scapegoat” by the firm and Facebook.

Did Facebook get a warning seven years ago?

As first reported in the Sunday Telegraph, Ireland’s Data Protection Commissioner (DPC) warned in 2011, some three years before the breach took place, that Facebook’s security policies were too weak to stop abuse.

Following an audit, the DPC said relying on developers to follow information rules in some cases was not good enough “to ensure security of user data”.

It also said Facebook processes to stop abuse were not strong enough to “assure users of the security of their data once they have third party apps enabled”.

Facebook said it strengthened its protections following the recommendations and was told it had addressed the DPC’s original concerns after a second audit in 2012. The tech firm also said it changed its platform entirely in 2014 with the regulator’s recommendations in mind.