1
00:00:05,228 --> 00:00:09,741
Protecting online communities from hate speech is incredibly challenging.

2
00:00:09,741 --> 00:00:17,837
The nuance of images, memes, and videos makes detecting hate speech even more difficult.

3
00:00:17,837 --> 00:00:23,021
And slang and expressions can vary widely across cultures, languages, and regions.

4
00:00:24,226 --> 00:00:26,138
To help protect people from hate speech,

5
00:00:26,138 --> 00:00:29,615
we’ve developed two new artificial intelligence technologies:

6
00:00:29,615 --> 00:00:33,268
Reinforce Integrity Optimizer and Linformer.

7
00:00:33,268 --> 00:00:38,215
To find hate speech, we build AI models by testing different neural architectures

8
00:00:38,215 --> 00:00:40,426
and selecting data for training.

9
00:00:40,426 --> 00:00:43,437
Normally, all of that is done offline.

10
00:00:43,437 --> 00:00:47,589
The model then goes online to see how well it performs in the real world.

11
00:00:49,089 --> 00:00:51,051
Reinforce Integrity Optimizer,

12
00:00:51,051 --> 00:00:55,889
or RIO, is a groundbreaking system for building and training AI models.

13
00:00:55,889 --> 00:00:58,456
RIO connects the online to the offline

14
00:00:58,456 --> 00:01:03,820
by continuously learning from billions of real-world examples of both regular content

15
00:01:03,820 --> 00:01:06,169
and instances of hate speech.

16
00:01:06,169 --> 00:01:08,420
With RIO, we can direct our model training

17
00:01:08,420 --> 00:01:11,387
to the most challenging hate speech violations.

18
00:01:11,387 --> 00:01:14,767
Optimizing our hate speech detection models more effectively

19
00:01:14,767 --> 00:01:19,213
has enabled us to take significantly more actions on hate speech violations

20
00:01:19,213 --> 00:01:21,248
than we could with our previous models.

21
00:01:22,685 --> 00:01:24,119
To better understand language,

22
00:01:24,119 --> 00:01:29,134
AI models have grown bigger, with hundreds of millions or billions of parameters.

23
00:01:29,134 --> 00:01:33,135
The amount of computation required grows at an unsustainable rate.

24
00:01:34,508 --> 00:01:37,481
Linformer is a new, innovative AI architecture

25
00:01:37,481 --> 00:01:40,343
that makes these systems vastly more efficient.

26
00:01:40,343 --> 00:01:44,761
It achieves the same results with linear rather than quadratic complexity.

27
00:01:44,761 --> 00:01:48,467
This helps us analyze long videos and other complex content.

28
00:01:50,302 --> 00:01:54,114
For Facebook, that means near-instant action taken on difficult

29
00:01:54,114 --> 00:01:59,729
to detect hate speech before it spreads, out of billions of posts checked every day.

30
00:01:59,729 --> 00:02:02,574
New technologies, RIO and Linformer,

31
00:02:02,574 --> 00:02:06,793
have fundamentally advanced AI models, helping us

32
00:02:06,793 --> 00:02:09,747
to further our commitment to keep our platforms as safe as possible.