Facebook’s having a right old time of it at the moment. It’s almost enough to make you feel sorry for Mark Zuckerberg and his team.

Almost – but not quite.

Yesterday I read that Facebook has closed 583 million fake accounts in the first three months of 2018 and moderated 2.5 million pieces of hate speech and 3.4 million pieces of graphic violence.

While I’m glad the company is taking steps forward, how on earth did it get to this point?

“This is the start of the journey and not the end of the journey and we’re trying to be as open as we can,” said Richard Allan, Facebook’s vice-president of public policy for Europe, the Middle East and Africa.

That’s a bit of an understatement, and maybe reflects part of the problem – although Facebook has been running since 2004, it feels like it’s only just waking up to its responsibilities as one of the world’s biggest tech companies.

Case in point – it also emerged this week that the data of another three million Facebook users was left exposed in a leak with embarrassingly poor security.

Between this and the recent Cambridge Analytica scandal, the social media giant needs to take a long, hard look at itself.

Facebook is synonymous with the rise of social media, and as a pioneer in the field it was never going to get everything right first time. But its troubles show how important it is to have a vision of what needs protecting and what’s going to be a problem in future.

Facebook is dealing with the fallout of Cambridge Analytica in part by pulling 200 apps that handled large amounts of user information before it limited data access in 2014.

By this point the site had been running for a decade, so it’s a bit worrying that this is when it started putting more robust safeguards in place.

As other technologies continue to emerge it’s important that other companies don’t ‘do a Facebook’ and get ahead of themselves without putting users’ welfare at the forefront.

Protecting user data has to be a priority, and with GDPR coming into force everyone’s particularly sensitive to it at the moment.

However, other emerging technologies such as Blockchain, Big Data and AI also need to be used with caution, although in theory they could be a step in the right direction.

For example, Facebook’s using AI to help it tackle inappropriate content and says it’s having some good results, but it needs to be careful it’s not just pulling in a bigger, scarier problem to eat the smaller one.

Last week Sara Simeone also told BusinessCloud that Facebook’s interest in Blockchain could be because the decentralised, transparent tech offers a very subtle way to show users it’s taking transparency more seriously.

As so many businesses look to innovative technologies there’s amazing potential, but they also need to make sure they understand the long-term impact of using them.

At the end of the day I think Facebook will probably be just fine – it reported a 63 per cent rise in profit and an increase in users in its quarterly results in April despite Cambridge Analytica.

It’s just a bit baffling that one of the most sophisticated tech companies in the world ends up with 583 million fake accounts on its site and the personal information of its users exposed – and these are just the issues we know about.

Looking back at Zuckerberg’s reported comment from 2004 about Facebook’s first few thousand users being (in less polite language) idiots for trusting him with their data, it looks like the social media giant may have made progress with its tech but not with its core values.

What do you think – could Facebook have done it any differently?

I’ll be focusing on social media for the BusinessCloud team so please email your thoughts and social media stories to katherine.lofthouse@businesscloud.co.uk .