AI Has Started Cleaning Up Facebook, But Can It Finish?

 

In the early hours of Aug. 25, 2017, a ragged insurgent group from Myanmar’s Rohingya Muslim minority attacked military outposts in the country’s northwest, killing 12 people. Security forces quickly retaliated with a campaign of village burning and mass killings that lasted weeks. As Rohingya died by the thousands, Myanmar’s military leaders took to Facebook.

A post from the commander-in-chief pledged to solve “the Bengali problem,” using a pejorative for Rohingya in Myanmar. Another general wrote to praise the “brilliant effort to restore regional peace,” observing that “race cannot be swallowed by the ground but only by another race.” A UN fact-finding report on the violence later cited the commander-in-chief’s post as suggestive of genocide, and noted the history of Facebook posts whipping up hate against Rohingya in Myanmar. The mission’s chair told journalists that the site had played a “determining role” in the crisis.

In the US Capitol in April, Senator Jeff Flake asked Facebook CEO Mark Zuckerberg how his company might have avoided that role. The impassive then-33-year-old billionaire noted that he had hired more Burmese speakers. Then he expounded on a favorite topic—artificial intelligence. “Over the long term, building AI tools is going to be the scalable way to identify and root out most of this harmful content,” he said. During two days of congressional hearings, Zuckerberg mentioned AI more than 30 times. It would, he told lawmakers, fight fake news, prevent ads that discriminate on the grounds of race or gender, and hobble terrorist propaganda.

Facebook has faced a dizzying series of accusations and scandals over the past year. They include enabling Russian election interference and employment discrimination, in addition to being an accessory to genocide in Myanmar. Monday, a Senate report said Russia's activities on Facebook properties were far greater than previously known, and suggested the company misled Congress by downplaying the idea that Russian trolls used its product to suppress turnout during the 2016 presidential election.

Many of Facebook’s apologies exhibit a common theme: Artificial intelligence will help solve the problems incubating on the company's platform. Mike Schroepfer, the company’s chief technology officer, says the technology is the only way to prevent bad actors from taking advantage of the service. With 2.3 billion regular users, having everything reviewed by humans would be prohibitively expensive—and creepy. “I think most people would feel uncomfortable with that,” Schroepfer says, eliding the possibility users may find it creepy to have algorithms review their every post. “To me AI is the best tool to implement the policy—I actually don't know what the alternative is.”

 
 
Facebook CTO Mike Schroepfer. Photograph: Patricia de Melo Moreira/AFP/Getty Images

Counting on AI is a gamble. Algorithms have proved capable of helping to police Facebook, but they are far from a cure-all—and may never be. The company has had great success in detecting and blocking pornography and nudity. But training software to reliably decode text is much more difficult than categorizing images. To tamp down harassment, hate speech, and dangerous conspiracy theories across its vast platform, Facebook needs AI systems capable of understanding the shifting nuances of more than 100 different languages. Any shortfalls must be caught by Facebook’s roughly 15,000 human reviewers, but at the social network’s scale it’s unclear how manageable their workload will be. As events in Myanmar showed, gaps in the enforcement net that may look small from Menlo Park can feel dangerously large to people whose world is being shaped by Facebook.

Flesh Detector

Facebook’s push to automate its content moderation started on the initiative of an ad executive, not an expert in online discourse. Tanton Gibbs was hired as an engineering director in 2014 to work on ad technology, as he had previously at Microsoft and Google. After hearing about Facebook’s moderation challenges, he suggested a more algorithms-first approach. Facebook had adopted a tool called PhotoDNA developed by Microsoft and Dartmouth College to block known child exploitation images, but wasn’t deploying image-analysis software or AI more broadly. “They were strictly using humans to review reports for things like pornography, hate speech, or graphic violence,” says Gibbs. “I saw we should automate that.” Facebook put Gibbs at the head of a new team, based in Seattle, known initially as CareML.

The new group quickly proved its worth. Gibbs and his engineers embraced a technology called deep learning, an approach to training algorithms with example data that had recently become much more powerful. Google showed the power of the technology when it developed software that learned to recognize cats. More quietly, Gibbs’ group taught deep learning algorithms to recognize pornography and nude human beings. Initially that software reviewed images flagged by Facebook users. After a year and a half, Gibbs got permission to let his systems flag newly submitted content before anyone reported it. Facebook says 96 percent of adult and nude images are now automatically detected, and taken down, before anyone reports them.
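Facebook hasn’t published details of those models, but the general recipe the passage describes, fine-tuning a deep learning image classifier on labeled example images and then running it over newly uploaded content, can be sketched roughly as follows. The backbone, threshold, and function names here are illustrative assumptions, not anything the company has disclosed.

```python
import torch
import torch.nn as nn
from torchvision import models

# Generic sketch of the approach described above: take a pretrained image
# backbone and fine-tune it as a binary "violates policy / benign" classifier
# on labeled example images. Architecture and training details are assumptions;
# Facebook has not published its production models.

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)   # two classes: violating / benign

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

def training_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One fine-tuning step on a labeled batch (e.g. user-reported images)."""
    model.train()
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

def flag_new_upload(image: torch.Tensor, threshold: float = 0.9) -> bool:
    """Score a newly uploaded image before anyone reports it (assumed threshold)."""
    model.eval()
    with torch.no_grad():
        probs = torch.softmax(model(image.unsqueeze(0)), dim=1)
    return probs[0, 1].item() >= threshold
```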

That’s still a lot of nude flesh slipping past Facebook’s algorithms. The company says it took down 30.8 million images and videos of nudity or sexual activity in the third quarter of 2018; that means the algorithms didn’t catch 1.3 million such images. In fact, Facebook estimates that the percentage of views with nudity or sexual content nearly doubled over the 12 months ending in September, to about 9 in every 10,000 views. “More nudity was posted on Facebook, and our systems did not catch all of it fast enough to prevent an increase in views,” Facebook said in its most recent community standards enforcement report. How much was posted and seen but not detected or reported is unknowable.
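As a rough back-of-envelope check on how those reported figures relate to one another (the numbers are Facebook's; the arithmetic below is only an illustration):

```python
# Rough arithmetic relating Facebook's reported Q3 2018 nudity-enforcement figures.
# The figures come from the community standards enforcement report cited above;
# the calculations are a back-of-envelope illustration.

total_removed = 30_800_000   # images/videos removed for nudity or sexual activity
proactive_rate = 0.96        # share detected by algorithms before any user report

caught_by_users_first = total_removed * (1 - proactive_rate)
print(f"Removed only after a user report: ~{caught_by_users_first:,.0f}")
# -> ~1,232,000, in line with the roughly 1.3 million figure in the text

# Prevalence: views of violating content per 10,000 content views
prevalence_per_10k = 9
print(f"About {prevalence_per_10k} in every 10,000 views, "
      f"or {prevalence_per_10k / 10_000:.2%} of all views")
```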

 
 

Still, the success of Gibbs’ project in fighting pornography has become a favorite talking point of Facebook executives touting the potential of AI to clean up their service. It’s working proof of the idea that an algorithmic immune system can help shelter Facebook users from harmful content—and the company from the consequences of hosting it. Facebook says that just over half of hate speech removed from the platform in the most recent three months was flagged first by algorithms, more than double the proportion earlier in the year. Some 15 percent of posts removed for bullying are identified and taken down before anyone has reported them. In neither case, though, do the algorithms remove the post; the programs flag the posts to be reviewed by people.
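The article gives only the outline of that division of labor. A minimal sketch of the general pattern, assuming a hypothetical classifier, threshold, and queue rather than anything Facebook has described, looks like this: a model scores each post, and anything above the threshold is routed to a human review queue instead of being removed automatically.

```python
from dataclasses import dataclass, field
from queue import PriorityQueue

# Hypothetical "flag, don't remove" pipeline: a model scores each post, and
# anything above a threshold is queued for a human reviewer who makes the call.
# The threshold and the toy scoring function are illustrative assumptions.

REVIEW_THRESHOLD = 0.7  # assumed cutoff; a real system would tune this per policy area

@dataclass(order=True)
class ReviewItem:
    priority: float                      # lower sorts first, so we store the negated score
    post_id: str = field(compare=False)
    text: str = field(compare=False)

def score_hate_speech(text: str) -> float:
    """Stand-in for a trained classifier; returns a probability-like score."""
    trigger_words = {"vermin", "subhuman"}   # toy placeholder, not a real model
    return 0.9 if trigger_words & set(text.lower().split()) else 0.1

review_queue = PriorityQueue()

def handle_new_post(post_id: str, text: str) -> None:
    score = score_hate_speech(text)
    if score >= REVIEW_THRESHOLD:
        # Flag only: the post is not removed until a person reviews it.
        review_queue.put(ReviewItem(priority=-score, post_id=post_id, text=text))

handle_new_post("p1", "those people are vermin")
handle_new_post("p2", "great game last night!")
print(review_queue.qsize())  # 1: only the flagged post reaches human reviewers
```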

Facebook’s challenge is getting its technology to work well enough that its roughly 15,000 human reviewers can reliably pick up the slack, in each of the more than 100 countries and languages where the service is used. Getting its hate speech and bullying detectors close to the effectiveness and autonomy of its porn filters will be particularly difficult.

Deep learning algorithms are pretty good at sorting images into categories—cat or car, porn or not porn. They’ve also made computers better with language, enabling virtual assistants like Alexa and significant jumps in the accuracy of automatic translations. But they’re still a long way from understanding even relatively simple text in the way humans do.

Decoding Language

To understand whether a post reading “I’m going to beat you” is a threat or a friendly joke, a human reviewer might effortlessly take into account whether it was paired with an image of a neighborhood basketball court, or the phrasing and tone of earlier messages. “How a model could use context in that way is not understood,” says Ruihong Huang, a professor at Texas A&M University. She helped organize an academic workshop on using algorithms to fight online abuse this fall, at one of the world’s top conferences for language processing research. Attendance and the number of papers submitted roughly doubled compared with the event’s debut in 2017—and not because researchers smelled victory. “Many companies and people in academia are realizing this is an important task and problem, but the progress is not that satisfying so far,” says Huang. “The current models are not that intelligent, in short; that’s the problem.”
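A toy illustration of the problem Huang describes (hypothetical code, not any production system): a model that sees only the text receives the identical string in both situations, so it cannot separate the threat from the joke without the image or the conversation history.

```python
# Toy illustration of the context problem: a text-only model produces the same
# score for the same words, whether they caption a pickup basketball photo or
# follow a string of hostile messages. The "model" here is a trivial stand-in.

def text_only_threat_score(text: str) -> float:
    """Hypothetical classifier that looks only at the words in the post."""
    return 0.8 if "beat you" in text.lower() else 0.1

post = "I'm going to beat you"

friendly = {"text": post, "attached_image": "neighborhood_basketball_court.jpg",
            "prior_messages": ["Rematch tonight?"]}
hostile = {"text": post, "attached_image": None,
           "prior_messages": ["You'd better watch yourself"]}

# Same words, same score: the signal that distinguishes the two cases lives
# in the image and the conversation history, which this model never sees.
print(text_only_threat_score(friendly["text"]))  # 0.8
print(text_only_threat_score(hostile["text"]))   # 0.8
```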

 

Srinivas Narayanan, who leads engineering in Facebook’s Applied Machine Learning group, agrees. He’s proud of the work his team has done on systems that can scan for porn and hate speech at huge scale, but human-level accuracy and nuance remains a distant hope. “I think we’re still far away from being able to understand that deeply,” he says. “I think machines can eventually, but we just don’t know how.”
