Reddit admits hosting Russian propaganda

Reddit has become the latest social-media platform to admit that Russian propaganda was used on its site during the 2016 US presidential election.

It follows reports from the news site The Daily Beast suggesting that a Russian troll farm was active on the website.

Co-founder Steve Huffman said the site had removed “a few hundred accounts” suspected of being of Russian origin.

In a blogpost, he said “indirect propaganda”, which was more complex to spot and stop, was the biggest issue.

“For example, the Twitter account @TEN_GOP is now known to be run by a Russian agent. Its tweets were amplified by thousands of Reddit users, and sadly, from everything we can tell, these users are mostly American and appear to be unwittingly promoting Russian propaganda.”

Conspiracy theories

Mr Huffman added: “I believe the biggest risk we face as Americans is our own ability to discern reality from nonsense, and this is a burden we all bear.

“I wish there was a solution as simple as banning all propaganda, but it’s not that easy. Between truth and fiction are a thousand shades of grey.

“It’s up to all of us—Redditors, citizens, journalists—to work through these issues.”

The @TEN_GOP account purported to be run by Republicans in Tennessee. It tweeted a mix of pro-Trump content and conspiracy theories, as well as more obviously fake news stories.

The Daily Beast investigation found no outright support for any particular candidate or viewpoint, concluding that Russia’s aim was to provoke and divide Americans on the internet and, as a result, in the physical world too.

Social media ‘weapon’

Social media platforms are under increased scrutiny from the US Congress over the issue of Russian meddling in the 2016 election.

Facebook has given the Senate Intelligence Committee thousands of ads believed to have been purchased by Russian agents.

The Washington Post reported that Reddit was now likely to be questioned over its involvement in the “weaponisation of social media” during the election.

Special counsel Robert Mueller has charged 13 Russians with interfering in the US election, all of whom are linked to troll farm the Internet Research Agency.

Meanwhile, pressure is mounting on Reddit to clean up the content on its platform.

In February, it banned a group that was generating fake porn – images and videos in which a person’s face is superimposed onto explicit material without permission.

This week, it emerged that another subreddit was sharing images of dead babies and animals being harmed.

Mr Huffman said the company was aware of the group, which currently has nearly 19,000 subscribers, and that the community was “under review”.

Facebook Messenger used to fight extremism

Facebook Messenger has been used to try to deradicalise extremists in a pilot project funded entirely by the company.

People posting extreme far-right and Islamist content in the UK were identified and contacted in an attempt to challenge their views.

Of the 569 people contacted, 76 had a conversation of five or more messages and eight showed signs it had a positive impact, researchers claim.

Privacy campaigners say it means Facebook is straying into surveillance.

Technology companies have been urged to do more to stop extremist material littering their sites following a series of cases involving people who were radicalised online.

This pilot was led by the counter-extremism organisation Institute for Strategic Dialogue (ISD), which says it was trying to mimic extremists’ own recruitment methods.

It told the BBC’s Victoria Derbyshire programme and BBC World Service’s World Hacks that it used software to scan several far-right and Islamist pages on Facebook for targets. It then manually reviewed their profiles for instances of violent, dehumanising and hateful language.
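
The ISD has not published the scanning software itself. Purely as an illustration of that two-stage shape – automated scanning for language patterns, followed by manual review – a minimal sketch might look like the following, in which the watch-list terms, posts and field names are all invented for the example:

```python
# Hypothetical sketch of keyword screening followed by manual review.
# The watch-list terms and posts below are placeholders for illustration,
# not ISD's actual criteria or any real Facebook data.
WATCH_TERMS = {"vermin", "traitors", "deserve violence"}  # illustrative only

def flag_for_review(posts):
    """Return posts containing any watch-list term, queued for a human reviewer."""
    flagged = []
    for post in posts:
        text = post["text"].lower()
        if any(term in text for term in WATCH_TERMS):
            flagged.append(post)
    return flagged

posts = [
    {"author": "user_a", "text": "These people are vermin."},
    {"author": "user_b", "text": "I simply disagree with this policy."},
]
for post in flag_for_review(posts):
    print(post["author"], "->", post["text"])  # a human then reviews the profile
```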

Terrorism survivors

It employed 11 “intervention providers” – former extremists, survivors of terrorism or trained counsellors – who were paid £25 per hour for eight hours’ work a week.

One was Colin Bidwell, who was caught up in the Tunisia terror attack in 2015.

Under a fake profile, he spoke to people who appeared to support Islamist extremism, including some who may have supported the Tunisia gunman, and was tasked with challenging their views through chatty conversation and questions.

“I think I’m entitled to ask those questions after what I’ve been through,” he explained. “If there’s the smallest chance that I could make some form of difference or awareness, for me I’m in.”

Many did not respond, but some entered into long conversations. Mr Bidwell would talk a little about religion, about the effect the attack had had on his wife and how he worried for the future of his children in “such a violent world”.

“One of the things I would say is, ‘You can have your extreme beliefs, but when it gets to the extreme violence – that’s the bit I don’t understand’,” he said.

Other intervention providers used different tactics depending on their background – a former extremist targeted young women, telling them she used to think as they did but that violence was not the answer.

‘Back from the edge’

Roughly half the people they chose to try to chat with had shown support for Islamist extremism and half had far-right sympathies. The group was also split evenly between men and women.

The aim was to “walk them back from the edge, potentially, of violence”, said Sasha Havlicek, the chief executive of the ISD.

“We were trying to fill a really big gap in responses to online recruitment and radicalisation and that gap is in the direct messaging space.

“There’s quite a lot of work being done to counter general propaganda with counter-speech and the removal of content, but we know that extremists are very effective in direct messaging,” she explained.

“And yet there’s no systematic work being done to reach out on that direct engagement basis with individuals being drawn into these groups.”

Privacy campaigners are concerned about the project, especially that Facebook funded something that broke its own rules by creating fake profiles.

Millie Graham Wood, a solicitor at the Privacy International charity, said: “If there’s stuff that they’re identifying that shouldn’t be there, Facebook should be taking it down.

“Even if the organisation [ISD] itself may have been involved in doing research over many years, that does not mean that they’re qualified to carry out this sort of… surveillance role.”

‘Really authentic’

Facebook funded the initiative but would not disclose how much it had spent. It said it did not give ISD special access to its users’ profiles.

Its public policy manager, Karim Palant, said the company did not allow the creation of fake profiles – which the project relied on – and that the research was done without interference from Facebook.

“The research techniques and exactly what they did was a matter for them,” he said.

During conversations, the intervention providers did not volunteer the fact that they were working for the ISD, unless asked directly. This happened seven times during the project, and on those occasions the conversation ended, sometimes after a row.

Overall, researchers claim, eight of the 569 people contacted showed signs in the conversations of rethinking their views.

Despite the small numbers involved, the ISD argues the pilot showed that online counter-extremism conversations can make a difference.

It now wants to explore how the approach could be expanded both in the UK and overseas, and how a similar method could be used on platforms such as Instagram, Reddit and Twitter.

Watch the BBC’s Victoria Derbyshire programme on weekdays between 09:00 and 11:00 on BBC Two and the BBC News Channel.

Twitter bot purge prompts backlash

The hashtag #TwitterLockout has trended after an apparent purge of suspected malicious bots on the social network.

Dozens of users report having had their accounts suspended until they provided a telephone number, which they then had to verify to prove they were real.

Some members have raised concerns about the number of followers they have lost, and claimed discrimination against right-wing political beliefs.

Others have in turn mocked allegations of bias.

“Twitter’s tools are apolitical, and we enforce our rules without political bias,” the social network has said in response.

“Every day we proactively look for suspicious account behaviours that indicate inorganic or automated activity, violations of our policies around having multiple accounts, or abuse.

“And every day we take action on any accounts we find that violate our terms of service, including by asking account owners to confirm a phone number so we can confirm a human is behind it.

“This is part of our ongoing, comprehensive efforts to make Twitter safer and healthier for everyone.”

The firm allows automated software to be used to send tweets under some circumstances, but forbids the posting of misleading content.

It has also issued new guidance about the use of automation and having multiple accounts.

The action follows an indictment announced last week by special counsel Robert Mueller against 13 Russian nationals and three Russian firms.

They are alleged to have used fake accounts on Twitter and other social media platforms to conduct “information warfare against the United States”.

Twitter and Facebook had faced criticism from US lawmakers earlier in the year for not having taken the problem seriously enough.

‘Junk news’

One researcher who has studied digital disinformation campaigns said a Twitter crackdown should come as no surprise.

“This is a company that’s under a lot of heat to clean up its act in terms of how its platform has been exploited to spread misinformation and junk news,” said Samantha Bradshaw from the University of Oxford’s Computational Propaganda Project.

“It now needs to rebuild trust with users and legislators to show it is trying to take action against these threats against democracy.”

Criminals hide ‘billions’ in crypto-cash – Europol

Three to four billion pounds of criminal money in Europe is being laundered through cryptocurrencies, according to Europol.

The agency’s director, Rob Wainwright, told the BBC’s Panorama that regulators and industry leaders needed to work together to tackle the problem.

The warning comes after Bitcoin’s value fell by half from record highs in December.

UK police have not commented to the programme.

Mr Wainwright said that Europol, the European Union Agency for Law Enforcement Cooperation, estimates that about 3-4% of the £100bn in illicit proceeds in Europe – some £3bn-£4bn – is laundered through cryptocurrencies.

“It’s growing quite quickly and we’re quite concerned,” he said.

There are many different types of cryptocurrency, but the best known is Bitcoin. They are intended to be a digital alternative to pounds, dollars or euros.

However, unlike traditional currencies, they are not printed by governments or traditional banks, nor controlled or regulated by them.

Instead, digital coins are created by computers solving complex mathematical problems, a process known as “mining”. A network of computers across the world then keeps track of the transactions using virtual addresses, hiding individual identities.

The anonymous and unregulated nature of virtual currencies attracts criminals, and makes payments hard for police to track because it is difficult to identify who is moving them.
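
As a rough illustration of the “mining” idea: real Bitcoin mining repeatedly hashes a binary block header against a difficulty target set by the network, but the simplified Python sketch below captures the general shape – searching by trial and error for a nonce that produces a hash with a few leading zeros:

```python
# Toy proof-of-work "mining": keep hashing the block data with an
# incrementing counter (nonce) until the hash starts with enough zeros.
# Simplified for illustration; real Bitcoin double-hashes a block header
# against a network-set difficulty target.
import hashlib

def mine(block_data, difficulty=4):
    """Find a nonce so sha256(block_data + nonce) begins with `difficulty` zero hex digits."""
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce, digest
        nonce += 1

nonce, digest = mine("alice pays bob 1 BTC")
print(nonce, digest)  # digest begins "0000..."
```

Because the only way to find such a nonce is trial and error, the work is expensive to produce but trivial for the rest of the network to verify.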

‘Money mules’

Mr Wainwright said: “They’re not banks and not governed by a central authority, so the police cannot monitor those transactions.

“And if they do identify them as criminal, they have no way to freeze the assets, unlike in the regular banking system.”

Another problem Europol has identified involves the method that criminals use to launder money.

Proceeds from criminal activity are being converted into bitcoins, split into smaller amounts and given to people who are seemingly not associated with the criminals but who are acting as “money mules”.

These money mules then convert the bitcoins back into hard cash before returning it to the criminals.

“It’s very difficult for the police in most cases to identify who is cashing this out,” Mr Wainwright said.

He said that police were also seeing a trend where money “in the billions” generated from street sales of drugs across Europe was being converted into bitcoins.

He called on those running the Bitcoin industry to work with enforcement agencies.

“They have to take responsible action and collaborate with us when we are investigating very large-scale crime,” he said.

“I think they also have to develop a better sense of responsibility around how they’re running virtual currency.”

‘Too slow’

Although British police have yet to respond to requests from Panorama, Parliament is seeking to step up regulation.

The Treasury Select Committee is looking into cryptocurrencies, and details of EU-wide regulations to force traders to disclose identities and report any suspicious activity are expected later this year.

Alison McGovern, the Labour MP for Wirral South, who serves on the committee, has been calling for an inquiry into cryptocurrencies.

“I think that will draw the attention of the Treasury and the Bank [of England] and others to how we put in place a regulatory system,” she said.

“I think probably, hand on heart, we have all been too slow, but the opportunity is not lost, and we should all get on with the job now.”

“Who Wants to be a Bitcoin Millionaire?” is a collaboration between BBC Click and Panorama and airs on BBC One on 12 February at 20:30 GMT.


Facebook moderator: I had to be prepared to see anything

“It’s mostly pornography,” says Sarah Katz, recalling her eight-month stint working as a Facebook moderator.

“The agency was very upfront about what type of content we would be seeing, in terms of how graphic it was, so we weren’t left in the dark.”

In 2016, Sarah was one of hundreds of human moderators working for a third-party agency in California.

Her job was to review complaints of inappropriate content, as flagged by Facebook’s users.

She shared her experience with BBC Radio 5 live’s Emma Barnett.

“They capped us on spending about one minute per post to decide whether it was spam and whether to remove the content,” she said.

“Sometimes we would also remove the associated account.

“Management liked us not to work any more than eight hours per day, and we would review an average of about 8,000 posts per day, so roughly 1,000 posts per hour.

“You pretty much learn on the job, specifically on day one. If I had to describe the job in one word, it would be ‘strenuous’.

Illegal images

“You definitely have to be prepared to see anything after just one click. You can be hit with things really fast without a warning.

“The piece of content that sticks with me was a piece of child pornography.

“Two children – the boy was maybe about 12 and the girl about eight or nine – standing facing each other.

“They weren’t wearing pants and they were touching each other. It really seemed like an adult was probably off camera telling them what to do. It was very disturbing, mostly because you could tell that it was real.

Reappearing posts

“A lot of these explicit posts circulate. We would often see them pop up from about six different users in one day, so that made it pretty challenging to find the original source.

“At the time there was nothing in the way of counselling services. There might be today, I’m not sure.”

Sarah says she would probably have taken up counselling if it had been offered.

“They definitely warn you, but warning you and actually seeing it are different.

“Some folks think that they can handle it and it turns out they can’t, or it’s actually worse than they expected.”

Graphic violence

“You become rather desensitised to it over time. I wouldn’t say it gets any easier but you definitely do get used to it.

“There was obviously a lot of generic pornography between consenting adults, which wasn’t as disturbing.

“There was some bestiality. There was one with a horse which kept on circulating.

“There’s a lot of graphic violence. There was one where a woman had her head blown off.

“Half of her body was on the ground and the torso upwards was still on the chair.

“The policy was more stringent on removing pornography than it was for graphic violence.”

Fake news

“I think Facebook was caught out by fake news. In the run-up to the US election, it seemed highly off the radar, at least at the time I was working there.

“I really cannot recall ever hearing the term ‘fake news’.

“We saw a lot of news articles that were circulating and reported by users, but I don’t ever recall management asking us to browse news articles to make sure that all the facts were accurate.

“It’s very monotonous, and you really get used to what’s spam and what’s not. It just becomes a lot of clicking.

“Would I recommend it? If you could do anything else, I would say no.”

Facebook responds

The BBC shared Sarah’s story with Facebook.

In response, a Facebook spokesman said: “Our reviewers play a crucial role in making Facebook a safe and open environment.

“This can be very challenging work, and we want to make sure they feel properly supported.

“That is why we offer regular training, counselling, and psychological support to all our employees and to everyone who works for us through our partners.

“Although we use artificial intelligence where we can, there are now over 7,000 people who review content on Facebook, and looking after their wellbeing is a real priority for us.”

Taiwanese police give cyber-security quiz winners infected devices

Police have apologised after giving infected memory sticks as prizes in a government-run cyber-security quiz.

Taiwan’s national police agency said 54 of the flash drives it gave out at an event highlighting the government’s cybercrime crackdown contained malware.

The virus, which can steal personal data and has been linked to fraud, was added inadvertently, it said.

The Criminal Investigation Bureau (CIB) apologised for the error and blamed the mishap on a third-party contractor.

It said 20 of the drives had been recovered.

Around 250 flash drives were given out at the expo, which was hosted by Taiwan’s Presidential Office from 11-15 December and aimed to highlight the government’s determination to crack down on cybercrime.

Cyber-fraud ring

All the drives were manufactured in China but the CIB ruled out state-sponsored espionage, saying instead that the bug had originated from a Taiwan-based supplier.

It said a single employee at the firm had transferred data onto 54 of the drives to “test their storage capacity”, infecting them in the process.

The malware, identified as the XtbSeDuA.exe program, was designed to collect personal data and transmit it to a Polish IP address, which then bounced it to unidentified servers.

The CIB said it had been used by a cyber-fraud ring uncovered by Europol in 2015.

Only older, 32-bit computers are vulnerable to the bug and common anti-virus software can detect and quarantine it, it said.

The server involved in the latest infections had been shut down, it said.

In May, IBM admitted it had inadvertently shipped malware-infected flash drives to some customers.

The computer maker said drives containing its Storwize storage system had been infected with a trojan and urged customers to destroy them.

At the time, it declined to comment on how the malware ended up on the flash drives or how many customers had been affected.

The trojan, part of the Reconyc family, bombards users with pop-ups and slows down computer systems.

It is known to target users in Russia and India.