NFL 1st and Future - Impact Detection

Detect helmet impacts in videos of NFL plays

The National Football League (NFL) has teamed up with Amazon Web Services (AWS) to develop the
“Digital Athlete,” a virtual representation of a composite NFL player that the NFL can use to model game
scenarios to try to better predict and prevent player injury. The NFL is actively addressing the need for a
computer vision system to detect on-field helmet impacts as part of the “Digital Athlete” platform, and the
league is calling on Kagglers to help.

In this competition, you’ll develop a computer vision model that automatically detects helmet impacts that
occur on the field. Kick off with a dataset of more than one thousand definitive head impacts from
thousands of game images, labeled video from the sidelines and end zones, and player tracking data. This
information is sourced from the NFL’s Next Gen Stats (NGS) system, which documents the position,
speed, acceleration, and orientation for every player on the field during NFL games.

This competition is part of the NFL’s annual 1st and Future competition, which is designed to spur
innovation in athlete safety and performance. For the first time this year, 1st and Future will be broadcast
in primetime during Super Bowl LV week on NFL Network, and winning Kagglers may have the
opportunity to present their computer vision systems as part of this exciting event.

If successful, you could make a real contribution to the NFL's research programs and, ultimately, to player safety.
Backed by this research, the NFL may implement rule changes and helmet design improvements to try to
better protect the athletes who play the game millions watch each week.

The National Football League is America's most popular sports league. Founded in 1920, the NFL
developed the model for the successful modern sports league and is committed to advancing progress in
the diagnosis, prevention, and treatment of sports-related injuries. Health and safety efforts include support
for independent medical research and engineering advancements as well as a commitment to work to better
protect players and make the game safer, including enhancements to medical protocols and improvements
to how our game is taught and played. For more information about the NFL's health and safety efforts,
please visit NFL.com/PlayerHealthandSafety.

Dataset Description
In this competition, you are tasked with identifying helmet collisions in video files. Each play has two
associated videos, showing a sideline and an endzone view, and the videos are aligned so that frames
correspond between the two views. The training set videos are in the train folder, with corresponding labels
in train_labels.csv; the videos for which you must predict are in the test folder.
To aid with helmet detection, you are also provided an ancillary dataset of images showing helmets with
labeled bounding boxes. These files are located in images and the bounding boxes in image_labels.csv.
This is a code competition. When you submit, your model will be rerun on a set of 15 unseen plays placed
in the same test folder. The publicly provided test videos are simply a set of mock plays (copied from the
training set) and are not used in scoring.
Note: the dataset provided for this competition has been carefully designed for the purposes of training
computer vision models and therefore contains plays that have much higher incidence of helmet impacts
than is normal. This dataset should not be used to make inferences about the incidence of helmet impact
rates during football games, as it is not a representative sample of those rates.
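Before the file-by-file breakdown below, a minimal loading sketch may help orient you. The input path is an assumption based on the usual Kaggle directory layout; adjust it to wherever your copy of the data lives.

```python
import pandas as pd

# Assumed Kaggle input path; change this if your copy of the data lives elsewhere.
DATA_DIR = "../input/nfl-impact-detection"

train_labels = pd.read_csv(f"{DATA_DIR}/train_labels.csv")
image_labels = pd.read_csv(f"{DATA_DIR}/image_labels.csv")

# Each play should appear twice: once per camera view (Sideline / Endzone).
print(train_labels["view"].value_counts())
print(train_labels.groupby(["gameKey", "playID"])["video"].nunique().head())
```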

Files
[train/test] mp4 videos of each play. Each play has two copies, one shot from the endzone and the other
shot from the sideline. The video pairs are matched frame for frame in time, but different players may be
visible in each view. You only need to make predictions for the view in which a player is actually visible.
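A minimal frame-reading sketch with OpenCV is shown below; the path and filename are assumptions, so substitute any video from the train folder.

```python
import cv2

# Illustrative filename only (assumed path); substitute any video from the train folder.
video_path = "../input/nfl-impact-detection/train/57590_003607_Endzone.mp4"

cap = cv2.VideoCapture(video_path)
n_frames = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    n_frames += 1
    # ... run a helmet detector on `frame` here ...
cap.release()
print("decoded", n_frames, "frames")
```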
train_labels.csv Helmet tracking and collision labels for the training set.
- gameKey: the ID code for the game.
- playID: the ID code for the play.
- view: the camera orientation.
- video: the filename of the associated video.
- frame: the frame number for this play.
Note: The Sideline and Endzone views have been time-synced such that the snap occurs 10 frames into the
video. This time alignment should be considered to be accurate to within +/- 3 frames or 0.05 seconds
(video data is recorded at approximately 59.94 frames per second).

- label: the associated player's number.
- [left/width/top/height]: the specification of the bounding box.
- impact: an indicator (1 = helmet impact) for bounding boxes associated with helmet impacts.
- impactType: a description of the type of helmet impact: helmet, shoulder, body, ground, etc.
- confidence: 1 = Possible, 2 = Definitive, 3 = Definitive and Obvious.
- visibility: 0 = Not Visible from View, 1 = Minimum, 2 = Visible, 3 = Clearly Visible.
For the purposes of evaluation, definitive helmet impacts are defined as meeting three criteria:

- impact = 1
- confidence > 1
- visibility > 0

Those labels with confidence = 1 document cases in which human labelers asserted it was possible that a
helmet impact occurred, but it was not clear that the helmet impact altered the trajectory of the helmet.
Those labels with visibility = 0 indicate that although there is reason to believe that an impact occurred to
that helmet at that time, the impact itself was not visible from the view.
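As a concrete illustration of the three criteria above, a pandas filter along these lines isolates the definitive impacts (the input path is again an assumption):

```python
import pandas as pd

train_labels = pd.read_csv("../input/nfl-impact-detection/train_labels.csv")  # assumed path

# Definitive helmet impacts per the criteria above: impact = 1, confidence > 1, visibility > 0.
definitive = train_labels[
    (train_labels["impact"] == 1)
    & (train_labels["confidence"] > 1)
    & (train_labels["visibility"] > 0)
]
print(len(definitive), "definitive impact boxes")
print(definitive["impactType"].value_counts())
```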

sample_submission.csv A valid sample submission file.
- gameKey: the ID code for the game.
- playID: the ID code for the play.
- view: the camera orientation.
- video: the filename of the associated video.
- frame: the frame number for this play.
- [left/width/top/height]: the specification of the bounding box of the prediction.
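A submission is therefore just a table of predicted impact boxes with those nine columns. The sketch below builds one from hypothetical predictions; the gameKey, playID, video name, frame, and box coordinates are illustrative values only.

```python
import pandas as pd

# Hypothetical predicted impact boxes; all values below are illustrative only.
rows = [
    {"gameKey": 57590, "playID": 3607, "view": "Endzone",
     "video": "57590_003607_Endzone.mp4", "frame": 150,
     "left": 100, "width": 25, "top": 200, "height": 25},
]

submission = pd.DataFrame(rows, columns=[
    "gameKey", "playID", "view", "video", "frame",
    "left", "width", "top", "height",
])
submission.to_csv("submission.csv", index=False)
```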
images Still photo equivalents of the train/test videos for use in making a helmet detector.
image_labels.csv contains the bounding boxes corresponding to the images.
- image: the image file name.
- label: the label type.
- [left/width/top/height]: the specification of the bounding box of the label, with left=0 and top=0 being the top left corner.
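Since boxes are given as left/top offsets from the image's top-left corner plus a width and height, drawing one is straightforward. A minimal OpenCV sketch (the input path is an assumption):

```python
import cv2
import pandas as pd

DATA_DIR = "../input/nfl-impact-detection"  # assumed path

image_labels = pd.read_csv(f"{DATA_DIR}/image_labels.csv")
row = image_labels.iloc[0]

img = cv2.imread(f"{DATA_DIR}/images/{row['image']}")
# left/top give the box's upper-left corner (origin at the image's top-left);
# width/height give its size, so the lower-right corner is (left+width, top+height).
x1, y1 = int(row["left"]), int(row["top"])
x2, y2 = x1 + int(row["width"]), y1 + int(row["height"])
cv2.rectangle(img, (x1, y1), (x2, y2), (0, 255, 0), 2)
cv2.imwrite("helmet_box_example.png", img)
```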
[train/test]_player_tracking.csv Each player wears a sensor that allows us to precisely locate them on the
field; that information is reported in these two files.
- gameKey: the ID code for the game.
- playID: the ID code for the play.
- player: the player's ID code.
- time: timestamp at 10 Hz.
- x: player position along the long axis of the field. See figure below.
- y: player position along the short axis of the field. See figure below.
- s: speed in yards/second.
- a: acceleration in yards/second^2.
- dis: distance traveled from prior time point, in yards.
- o: orientation of player (deg).
- dir: angle of player motion (deg).
- event: game events like a snap, whistle, etc.
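Because the views are synced so the snap falls 10 frames into each video (at roughly 59.94 fps) and the tracking data is sampled at 10 Hz, video frames can be mapped to approximate tracking timestamps. The sketch below assumes the snap is labeled "ball_snap" in the event column and uses illustrative gameKey/playID values; treat it as a starting point, not an official alignment procedure.

```python
import pandas as pd

FPS = 59.94      # approximate video frame rate (see the note above)
SNAP_FRAME = 10  # both views are synced so the snap occurs 10 frames into the video

# Assumed Kaggle input path; the gameKey/playID values below are illustrative only.
tracking = pd.read_csv("../input/nfl-impact-detection/train_player_tracking.csv")
play = tracking[(tracking["gameKey"] == 57590) & (tracking["playID"] == 3607)].copy()
play["time"] = pd.to_datetime(play["time"])

# The 'event' column marks game events such as the snap; the exact label
# ("ball_snap") is an assumption, so inspect play["event"].unique() on the real data.
snap_time = play.loc[play["event"] == "ball_snap", "time"].min()

# Approximate tracking timestamp of a given video frame, then take the nearest
# 10 Hz sample for each player (at most ~0.05 s away).
frame = 150
frame_time = snap_time + pd.to_timedelta((frame - SNAP_FRAME) / FPS, unit="s")
idx = (play["time"] - frame_time).abs().groupby(play["player"]).idxmin()
print(play.loc[idx, ["player", "x", "y", "s", "dir"]])
```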
