TED 2018: Fake Obama video creator defends invention

A researcher who created a fake video of President Obama has defended his invention at the latest TED talks.

The clip shows a computer-generated version of the former US leader mapped to fit an audio recording. Experts have warned the tech involved could spark a “political crisis”.

Dr Supasorn Suwajanakorn acknowledged that there was a “potential for misuse”.

But, at the Vancouver event, he added the tech could be a force for good.

The computer engineer is now employed by Google’s Brain division. He is also working on a tool to detect fake videos and photos on behalf of the AI Foundation.

Damage risk

Dr Suwajanakorn, along with colleagues Steven Seitz and Ira Kemelmacher-Shlizerman from the University of Washington, released a paper in July 2017 describing how they created the fake Obama.

They developed an algorithm that took audio and transposed it on to a 3D model of the president’s face.

The task was completed by a neural network trained on 14 hours of Obama speeches, with the resulting data layered on top of a basic mouth shape.
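
To make the approach concrete, here is a minimal sketch of that audio-to-mouth mapping – an illustration in the spirit of the paper, not the authors' code. The feature counts, dimensions and the choice of an LSTM are assumptions for the example.

```python
# Minimal sketch (assumed architecture, not the authors' code): learn a
# mapping from per-frame audio features to mouth-shape parameters.
import torch
import torch.nn as nn

class AudioToMouth(nn.Module):
    def __init__(self, n_audio_features=28, n_mouth_params=36, hidden=128):
        super().__init__()
        # A recurrent layer lets mouth shape depend on surrounding audio context.
        self.rnn = nn.LSTM(n_audio_features, hidden, batch_first=True)
        # Project the hidden state to mouth-shape parameters (e.g. lip landmarks).
        self.head = nn.Linear(hidden, n_mouth_params)

    def forward(self, audio):          # audio: (batch, time, n_audio_features)
        out, _ = self.rnn(audio)
        return self.head(out)          # (batch, time, n_mouth_params)

# Training would pair audio frames extracted from hours of speeches with mouth
# shapes tracked in the matching video; the predicted shapes are then
# composited onto the 3D head model (compositing not shown here).
model = AudioToMouth()
mouth = model(torch.randn(1, 100, 28))  # 100 audio frames -> 100 mouth shapes
```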

Dr Suwajanakorn acknowledged that “fake videos can do a lot of damage” and needed an ethical framework.

“The reaction to our work was quite mixed. People, such as graphic designers, thought it was a great tool. But it was also very scary for other people,” he told the BBC.

Political crisis

The technology could offer history students the chance to meet and interview Holocaust victims, he said. Another example would be to let people create avatars of dead relatives.

Experts remain concerned that the technology could create new types of propaganda and false reports.

“Fake news tends to spread faster than real news as it is both novel and confirms existing biases,” said Dr Bernie Hogan, a senior research fellow at the Oxford Internet Institute.

“Seeing someone make fake news with real voices and faces, as seen in the recent issue about deepfakes, will likely lead to a political crisis with associated calls to regulate the technology.”

Deepfakes refers to the recent controversy over an easy-to-use software tool that scans photographs and then uses them to swap one person’s features with another’s. It has been used to create hundreds of pornographic video clips featuring celebrities’ faces.

Dr Suwajanakorn said that while fake videos were a new phenomenon, it was relatively easy to detect forgeries.

“Fake videos are easier to verify than fake photos because it is hard to make all the frames in a video perfect,” he told the BBC.

“Teeth and tongues are hard to model and could take another decade,” he added.

The researcher also questioned whether it made sense for fake news creators to make complex videos “when they can just write fake stories”.

Russia to block Telegram app over encryption

A court in Moscow has approved a request from the Russian media regulator to block the Telegram messaging app immediately.

The media regulator sought to block the app because the firm had refused to hand over encryption keys used to scramble messages.

Security officials say they need to monitor potential terrorists.

But the company said the way the service was built meant it had no access to customers’ encryption keys.
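
The dispute turns on the end-to-end pattern the company describes, in which keys are generated on and never leave users’ devices, so the operator has nothing meaningful to surrender. A minimal sketch of that general idea, using the PyNaCl library for illustration rather than Telegram’s own MTProto protocol:

```python
# Sketch of the end-to-end pattern (PyNaCl for illustration, not MTProto):
# private keys are generated on, and never leave, each user's device.
from nacl.public import PrivateKey, Box

alice_key = PrivateKey.generate()  # created on Alice's device, never uploaded
bob_key = PrivateKey.generate()    # created on Bob's device, never uploaded

# The service only relays public keys and ciphertext between the two.
ciphertext = Box(alice_key, bob_key.public_key).encrypt(b"meet at noon")

# Only Bob's private key (which the server never held) can decrypt.
plaintext = Box(bob_key, alice_key.public_key).decrypt(ciphertext)
assert plaintext == b"meet at noon"
```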

Telegram had missed a deadline of 4 April to hand over the keys.

Russia’s main security agency, the FSB, has said Telegram is the messenger of choice for “international terrorist organisations in Russia”.

A suicide bomber who killed 15 people on a subway train in St Petersburg last April used the app to communicate with accomplices, the FSB said last year.

The app is also widely used by the Russian authorities, Reuters news agency reports.

In its court filing, media regulator Roskomnadzor said Telegram had failed to comply with its legal requirements as a “distributor of information”.

Telegram’s lawyer, Pavel Chikov, said the official attempt to stop the app being used in Russia was “groundless”.

“The FSB’s requirements to provide access to private conversations of users are unconstitutional, baseless, which cannot be fulfilled technically and legally,” he said.

The messaging app is widely used across Russia and many nations in the Middle East, as well as around the rest of the world. It says it has more than 200 million active users.

Its popularity has grown because of its emphasis on encryption, which thwarts many widely used methods of reading confidential communications.

It allows groups of up to 5,000 people to send messages, documents, videos and pictures without charge and with complete encryption.

Telegram has been used by the Islamic State (IS) group and its supporters, though the company says it has made efforts to close down pro-IS channels.

Donkey Kong champion loses title for ‘using emulator’

The Donkey Kong world champion has been stripped of a record he set for the 1981 arcade classic following claims he did not use an original machine.

Twin Galaxies, which tracks video games records, believes that Billy Mitchell’s 1,047,200 score on the original Donkey Kong was achieved using an emulator.

An emulator is a computer program that replicates an arcade or console machine.
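
The principle fits in a few lines: an emulator is software that reproduces the original hardware’s fetch-decode-execute loop. The toy two-instruction machine below is invented for illustration; real arcade emulators model the original CPU, video and sound chips, which is why subtle visual differences can betray them.

```python
def run(program):
    """Toy 'CPU' emulator: opcode 1 adds the next byte to an accumulator,
    opcode 0 halts. Real emulators run the same fetch-decode-execute loop
    over the actual arcade hardware's instruction set."""
    acc, pc = 0, 0
    while program[pc] != 0:         # fetch
        if program[pc] == 1:        # decode
            acc += program[pc + 1]  # execute
        pc += 2
    return acc

print(run([1, 40, 1, 2, 0]))  # -> 42
```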

Twin Galaxies was alerted by Donkey Kong forum moderator Jeremy Young.

Mr Young suspected that images seen in a video of Mr Mitchell’s Donkey Kong world record run were impossible to generate on an arcade machine.

He believed that the game must have been played on an emulator.

Twin Galaxies then did its own investigation, which led it to conclude that Mr Mitchell had not used the original arcade machine.

In a post on its findings, Twin Galaxies said: “Based on the complete body of evidence presented in this official dispute thread, Twin Galaxies administrative staff has unanimously decided to remove all of Billy Mitchell’s scores as well as ban him from participating in our competitive leaderboards.

“We have notified Guinness World Records of our decision.”

The BBC has contacted Mr Mitchell for comment.

As well as conducting its own investigation, Twin Galaxies said it had taken into account at least two different third-party inquiries which came to “identical conclusions”.

It concluded: “With this ruling, Twin Galaxies can no longer recognise Billy Mitchell as the first million-point Donkey Kong record holder.”

It said that record would now be bestowed on another gamer, Steve Wiebe.

Russia seeks to block Telegram messaging app

The Russian government has started legal proceedings to block the Telegram messaging app in the country.

The Roskomnadzor media regulator is seeking the block because the firm has refused to hand over encryption keys used to scramble messages.

Telegram, which is based in Dubai, was given a deadline of 4 April to hand over the keys.

The company has refused, saying the way the service is built means it has no access to them.

Russia’s main security agency, the FSB, wants the keys so it can read messages and prevent future terror attacks in the country.

The messaging app is widely used across Russia and many nations in the Middle East. It claims to have more than 200 million active users.

Its popularity has grown because of its emphasis on encryption, which thwarts many widely used methods of reading confidential communications.

Telegram is also currently trying to raise funds for expansion via an Initial Coin Offering. This involves investors buying into a crypto-currency, called Gram, created and run by the firm. So far, it is believed to have raised about $1.7bn (£1.2bn) via this funding scheme.

Apple Watch provides murder case clues

Police in Australia have presented data gathered from an Apple Watch as evidence in a murder trial.

Grandmother Myrna Nilsson was wearing the device when she was killed in 2016.

Her daughter-in-law Caroline Nilsson is accused of staging an ambush, after claiming she was tied up by a group of men who entered the house.

But data from the victim’s smartwatch suggests that she was ambushed as she arrived home, and died hours earlier than Ms Nilsson claims.

‘Ambush’

Ms Nilsson told police that her mother-in-law had been followed home by a group of men in a car.

According to ABC News, Ms Nilsson said her mother-in-law had argued with the men outside the house for about 20 minutes, but she did not hear the fatal attack because she was in the kitchen with the door closed.

A neighbour called the police when Ms Nilsson emerged from the house gagged and distressed after 22:00.

Ms Nilsson says the attackers had tied her up and that she had made her way out of the house as soon as they had left.

But prosecutor Carmen Matteo said evidence from the victim’s smartwatch suggested Ms Nilsson had staged the home invasion.

The body of 57-year-old Myrna Nilsson was found in the laundry room of her home in Valley View, Adelaide, in September 2016.

Evidence

“The evidence from the Apple Watch is a foundational piece of evidence for demonstrating the falsity of the defendant’s account to police,” said Ms Matteo.

“A watch of this type… contains sensors capable of tracking the movement and rate of movement of the person wearing it… it also measures the heart rate.”

The prosecution alleged that the watch had recorded data consistent with a person going into shock and losing consciousness.
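
To illustrate the kind of signal involved – not the forensic method actually used in the case – a simple scan over timestamped heart-rate samples can flag a sharp, sustained drop of the sort consistent with loss of consciousness. The thresholds and readings below are invented:

```python
def find_collapse(samples, drop_bpm=40, window=3):
    """samples: (minute, bpm) tuples in time order. Returns the minute of
    the last normal reading before a sharp, sustained drop, else None."""
    for i in range(len(samples) - window):
        t0, bpm0 = samples[i]
        later = [bpm for _, bpm in samples[i + 1:i + 1 + window]]
        # Require the drop to persist, so a single noisy sample is ignored.
        if all(bpm0 - bpm >= drop_bpm for bpm in later):
            return t0
    return None

readings = [(0, 82), (1, 84), (2, 79), (3, 31), (4, 24), (5, 0), (6, 0)]
print(find_collapse(readings))  # -> 2: the last minute of normal rhythm
```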

“The deceased must have been attacked at around 6:38pm and had certainly died by 6:45pm,” she said.

“If that evidence is accepted, it tends to contradict the accused’s version of an argument occurring between the deceased and these men outside the laundry for a period of up to 20 minutes.

“Her emergence from the house was well after 10:00pm and if the Apple Watch evidence is accepted, that is over three hours after the attack on the deceased.”

Magistrate Oliver Koehn denied Ms Nilsson bail based on the “apparent strength of the prosecution’s case”. The trial will continue in June.

Brain back-up firm Nectome loses link to MIT

A company attempting to map people’s brains so their memories can be stored in computers has lost its link to one of the United States’ top universities.

US start-up Nectome revealed its brain back-up plan last month, warning at the time that the process involved would be “100% fatal”.

A number of neuroscientists subsequently poured scorn on the plan.

The Massachusetts Institute of Technology (MIT) has now announced that it is severing ties with the project.

One of the university’s professors had previously benefitted from a federal grant given to Nectome and was attempting to combine its work with his own research into mouse brains.

“Neuroscience has not sufficiently advanced to the point where we know whether any brain preservation method is powerful enough to preserve all the different kinds of biomolecules related to memory and the mind,” MIT said in a blog post explaining its decision.

Nectome responded, saying: “We appreciate the help MIT has given us, understand their choice, and wish them the best.”

‘Potential to benefit humanity’

The university’s in-house publication MIT Technology Review had been first to draw attention to Nectome’s plans and the educational establishment’s own involvement.

It has since reported that the collaboration had attracted “sharp criticism” from experts in the field, who feared it lent credibility to an effort that was doomed to fail.

“Fundamentally, the company is based on a proposition that is just false,” said Sten Linnarsson of the Karolinska Institute in Sweden.

“[And there’s a risk] some people actually kill themselves to donate their brains.”

Nectome had previously said it believed it would one day be possible to survey the connectome – the map of neural connections within the brain – in such detail that a person’s memories could be reconstructed.

In order to achieve this, the brain must be preserved at the point of death – a process called vitrifixation.

MIT Technology Review had reported that the firm was soon hoping to test its theories on the head of someone planning a doctor-assisted suicide.

However, Nectome has acknowledged that its work is at a relatively early stage.

“We believe that clinical human brain preservation has immense potential to benefit humanity, but only if it is developed in the light, with input from medical and neuroscience experts,” it said in a statement posted to its website.

“We believe that rushing to apply vitrifixation today would be extremely irresponsible and hurt eventual adoption of a validated protocol.”

Despite the sceptics, Nectome has won a $960,000 (£687,000) grant from the US National Institute of Mental Health.

It is also backed by Y Combinator, a high-profile Silicon Valley-based funder that previously invested in Dropbox, Airbnb and Reddit among others.

HTC’s Tom Daley swimming pool selfie advert banned

An advert showing British Olympic diver Tom Daley using a smartphone at a swimming pool has been banned on the grounds that similar behaviour by consumers would damage the device.

HTC has promoted the ad on social media since mid-2017.

But an investigation by the UK’s advertising watchdog discovered that the device’s own instructions said the phone should not come into contact with pool water.

HTC apologised on Thursday.

The Taiwanese firm also removed the promotion from its YouTube channel.

In a statement, it said: “We are disappointed by the ASA ruling, but have removed the video from our sites.

“We apologise if anyone felt misled as to the handset’s water resistant capabilities.”

The advert was designed to highlight the HTC U11’s squeezable sides, which – when pressed – trigger a photo from its front “selfie” camera.

It showed Mr Daley repeatedly jumping from the highest platform at a swimming pool, and taking images of himself as he fell.

In addition, the athlete was shown using the phone as he climbed out of the water.

HTC defended the campaign on the grounds that the device’s IP67 waterproof rating meant it could be briefly submerged to a depth of up to 1m (3.3ft).

It said that because Mr Daley had entered the water feet first and held the phone above his head, the phone had not gone any deeper.

Furthermore, an on-screen warning had told viewers not to “try this stunt”.

But the Advertising Standards Authority said that a normal member of the public attempting something similar would be unlikely to be able to prevent their phone sinking below 1m.

The watchdog also noted that HTC had acknowledged that “there were too many variations of water temperature and chemical composition” to be able to say that the U11 could be used in most swimming pools.

And it highlighted that the device’s own instructions said it should not be intentionally submerged in water, and were that to happen by accident its buttons should not be pressed immediately afterwards.

As a result it judged the ad to have exaggerated the phone’s capabilities in a misleading manner.

Still online

Although the original complaint had been about the appearance of the ad on Facebook, the ASA said that other posts must now be dealt with.

“Our ruling against HTC applies across media,” a spokesman told the BBC.

“We expect HTC to ensure its ad is removed from all media and we’ll be contacting the company to remind them of that.”

The BBC understands that the firm intends to remove the ad from all its global accounts by the end of the day.

The ASA also issued other tech-related rulings among its latest decisions, including:

  • a ban of an ad for the OnePlus 5 smartphone, which had featured “excessive gore”
  • finding that four listings on Amazon’s UK website had misrepresented the size of the discounts offered on electronic goods
  • agreeing that a TV ad by Barclays Bank had misleadingly implied that if a website featured a green padlock symbol in its browser address bar then it could be trusted
  • judging that the charity Electrosensitivity-UK held inadequate evidence to back up claims in a poster that wi-fi and other types of electromagnetic radiation posed health risks

Facebook ‘ugly truth’ growth memo haunts firm

A Facebook executive’s memo that claimed the “ugly truth” was that anything it did to grow was justified has been made public, embarrassing the company.

The 2016 post said that this applied even if it meant people might die as a result of bullying or terrorism.

Both the author and the company’s chief executive, Mark Zuckerberg, have denied they actually believe the sentiment.

But it risks overshadowing the firm’s efforts to tackle an earlier scandal.

Facebook has been under intense scrutiny since it acknowledged that it had received reports that a political consultancy – Cambridge Analytica – had not destroyed data harvested from about 50 million of its users years earlier.

The memo was first made public by the Buzzfeed news site, and was written by Andrew Bosworth.

The 18 June 2016 memo:

So we connect more people.

That can be bad if they make it negative. Maybe it costs a life by exposing someone to bullies. Maybe someone dies in a terrorist attack co-ordinated on our tools.

And still we connect people.

The ugly truth is that we believe in connecting people so deeply that anything that allows us to connect more people more often is *de facto* good. It is perhaps the only area where the metrics do tell the true story as far as we are concerned.

[…]

That’s why all the work we do in growth is justified. All the questionable contact importing practices. All the subtle language that helps people stay searchable by friends. All of the work we do to bring more communication in. The work we will likely have to do in China some day. All of it.

Mr Bosworth – who co-invented Facebook’s News Feed – has held high-level posts at the social network since 2006, and is currently in charge of its virtual reality efforts.

Mr Bosworth has since tweeted that he “didn’t agree” with the post at the time he wrote it, but had shared it with staff to be “provocative”.

“Having a debate around hard topics like these is a critical part of our process and to do that effectively we have to be able to consider even bad ideas,” he added.

Mark Zuckerberg has issued his own statement.

“Boz is a talented leader who says many provocative things,” it said.

“This was one that most people at Facebook including myself disagreed with strongly. We’ve never believed the ends justify the means.”

A follow-up report by the Verge revealed that dozens of Facebook’s employees had subsequently used its internal chat tools to discuss concerns that such material had been leaked to the media.

Analysis

By Rory Cellan-Jones, Technology correspondent

What immediately struck me about this leaked memo was the line about “all the questionable contact importing practices”.

When I downloaded my Facebook data recently, it was the presence of thousands of my phone contacts that startled me. But the company’s attitude seemed to be that this was normal and it was up to users to switch off the function if they didn’t like it.

What we now know is that in 2016 a very senior executive thought this kind of data gathering was questionable.

So, why is it only now that the company is having a debate about this and other dubious practices?

Until now, Facebook has not been leaky. Perhaps we will soon get more insights from insiders as this adolescent business tries to grow up and come to terms with its true nature.

Fact checking

The disclosure coincided with Facebook’s latest efforts to address the public’s and investors’ concerns about its management.

Its shares are trading about 14% lower than they were before the Cambridge Analytica scandal began, and several high-profile figures have advocated deleting Facebook accounts.

The company hosted a press conference on Thursday, at which it said it had:

  • begun fact-checking photos and videos posted in France, and would expand this to other countries soon
  • developed a new investigative tool to detect fake accounts and prevent harmful election-related activities
  • started work on a public archive that will make it possible for journalists and others to investigate political ads posted to its platform

In previous days it had also announced a revamp of its privacy settings, and said it would restrict the amount of data exchanged with businesses that collect information on behalf of advertisers.

The latest controversy is likely, however, to provide added ammunition for critics.

CNN reported earlier this week that Mr Zuckerberg had decided to testify before Congress “within a matter of weeks” after refusing a request to do so before UK MPs. However, the BBC has been unable to independently verify whether he will answer questions in Washington.

Facebook’s Zuckerberg fires back at Apple’s Tim Cook

Facebook’s chief executive has defended his leadership following criticism from his counterpart at Apple.

Mark Zuckerberg said it was “extremely glib” to suggest that because the public did not pay to use Facebook, the firm did not care about them.

Last week, Apple’s Tim Cook said it was an “invasion of privacy” to traffic in users’ personal lives.

And when asked what he would do if he were Mr Zuckerberg, Mr Cook replied: “I wouldn’t be in that situation.”

Facebook has faced intense criticism after it emerged that it had known for years that Cambridge Analytica had harvested data from about 50 million of its users, but had relied on the political consultancy to self-certify that it had deleted the information.

Channel 4 News has since reported that at least some of the data in question is still in circulation despite Cambridge Analytica insisting it had destroyed the material.

Mr Zuckerberg was asked about Mr Cook’s comments during a lengthy interview given to news site Vox about the privacy scandal.

He also acknowledged that Facebook was still not transparent enough about some of the choices it had taken, and floated the idea of an independent panel being able to override some of its decisions.

‘Dire situation’

Mr Cook has spoken in public twice since Facebook’s data-mining controversy began.

On 23 March, he took part in the China Development Forum in Beijing.

“I think that this certain situation is so dire and has become so large that probably some well-crafted regulation is necessary,” news agency Bloomberg quoted him as saying in response to a question about the social network’s problems.

“The ability of anyone to know what you’ve been browsing about for years, who your contacts are, who their contacts are, things you like and dislike and every intimate detail of your life – from my own point of view it shouldn’t exist.”

Then in an interview with MSNBC and Recode on 28 March, Mr Cook said: “I think the best regulation is no regulation, is self-regulation. However, I think we’re beyond that here.”

During this second appearance – which has yet to be broadcast in full – he added: “We could make a tonne of money if we monetised our customer, if our customer was our product. We’ve elected not to do that… Privacy to us is a human right.”

Apple makes most of its profits from selling smartphones, tablets and other computers, as well as associated services such as online storage and its various media stores.

This contrasts with other tech firms whose profits are largely derived from advertising, including Google, Twitter and Facebook.

Mr Zuckerberg had previously told CNN that he was “open” to new regulations.

But he defended his business model when questioned about Mr Cook’s views, although he mentioned neither Apple nor its leader by name.

“I find that argument, that if you’re not paying that somehow we can’t care about you, to be extremely glib and not at all aligned with the truth,” he said.

“The reality here is that if you want to build a service that helps connect everyone in the world, then there are a lot of people who can’t afford to pay.”

He added: “I think it’s important that we don’t all get Stockholm syndrome and let the companies that work hard to charge you more convince you that they actually care more about you, because that sounds ridiculous to me.”

Mr Zuckerberg also defended his leadership by invoking Amazon’s chief executive.

“I make all of our decisions based on what’s going to matter to our community and focus much less on the advertising side of the business,” he said.

“I thought Jeff Bezos had an excellent saying: ‘There are companies that work hard to charge you more, and there are companies that work hard to charge you less.’”

‘Turned into a beast’

Elsewhere in the 49-minute interview, Mr Zuckerberg said he hoped to make Facebook more “democratic” by giving members a chance to challenge decisions its own review team had taken about what content to permit or ban.

Eventually, he said, he wanted something like the “Supreme Court”, in which people who did not work for the company made the ultimate call on what was acceptable speech.

Mr Zuckerberg also responded to recent criticism from a UN probe into allegations of genocide against the Rohingya Muslims in Myanmar.

Last month, one of the human rights investigators said Facebook had “turned into a beast” and had “played a determining role” in stirring up hatred against the group.

Mr Zuckerberg claimed messages had been sent “to each side of the conflict” via Facebook Messenger, attempting to make them go to the same locations to fight.

But he added that the firm had now set up systems to detect such activity.

“We stop those messages from going through,” he added.

“But this is certainly something that we’re paying a lot of attention to.”

Killer robots: Experts warn of ‘third revolution in warfare’

More than 100 leading robotics experts are urging the United Nations to take action in order to prevent the development of “killer robots”.

In a letter to the organisation, artificial intelligence (AI) leaders, including billionaire Elon Musk, warn of “a third revolution in warfare”.

The letter says “lethal autonomous” technology is a “Pandora’s box”, adding that time is of the essence.

The 116 experts are calling for a ban on the use of AI in managing weaponry.

“Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend,” the letter says.

“These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways,” it adds.

There is an urgent tone to the message from the technology leaders, who warn that “we do not have long to act”.

“Once this Pandora’s box is opened, it will be hard to close.”

Experts are calling for what they describe as “morally wrong” technology to be added to the list of weapons banned under the UN Convention on Certain Conventional Weapons (CCW).

Along with Tesla co-founder and chief executive Mr Musk, the technology leaders include Mustafa Suleyman, Google’s DeepMind co-founder.

A UN group focusing on autonomous weaponry was scheduled to meet on Monday but the meeting has been postponed until November, according to the group’s website.

A potential ban on the development of “killer robot” technology has previously been discussed by UN committees.

In 2015, more than 1,000 tech experts, scientists and researchers wrote a letter warning about the dangers of autonomous weaponry.

Among the signatories of the 2015 letter were scientist Stephen Hawking, Apple co-founder Steve Wozniak and Mr Musk.

What is a ‘killer robot’?

A killer robot is a fully autonomous weapon that can select and engage targets without human intervention. Such weapons do not currently exist, but advances in technology are bringing them closer to reality.

Those in favour of killer robots believe the current laws of war may be sufficient to address any problems that might emerge if they are ever deployed, arguing that a moratorium, not an outright ban, should be called if this is not the case.

However, those who oppose their use believe they are a threat to humanity and any autonomous “kill functions” should be banned.
