All Their Painted Faces
Vanessa Taylor
Channels play the same clips over and over again. They never start from the beginning, a girl on asphalt, nor from the beginning’s beginning, knowing the girl is Shahida and it was the day before her seventeenth birthday. They never start with footage from the AIs that later hover over her, both blood and the dawn reflecting off their home displays, nor with the internal programming mistaking Shahida for a twenty-year-old man who stole a package of $20 diapers from the store months ago. To them, anyone dark enough is interchangeable and there is no difference between the khimar around her head or the winter scarf he wore. Starting at any of those points would create sympathy. So instead they skip to long after the block watched as her body was unceremoniously shoved into the back of a truck and the AI lovingly loaded into their own. The channels start this day with the sun in the west.
“Because of the new Rekognize program, police were able to identify almost all rioters immediately,” the newscasters say, flashing the same bright smile across channels. Behind them, clips play. Some show protesters facing off against police made of metal or flesh. Others focus on American flags curling in on themselves as they are reduced to ash. They are afraid of how people dance around the remains. The drumbeats in the background make one newscaster swallow nervously, fingers tapping against a glass of water. People know when a language is being spoken, even if the beat is unfamiliar, and they can understand a threat.
“Unlike previous programs, Rekognize’s facial recognition technology can reliably identify subjects even if their faces are obscured.” The newscasters are salesmen, too. They speak in taglines and ignore how less than fifty percent accuracy shouldn’t be confused with reliability. “All the program needs are the wrinkles at the corner of your eyes,” a newscaster jokes, pointing at their co-worker. “Rob, I think they’d find you pretty quick!”
Faces flash across the screens behind them with names trailing along the bottom. The channels’ camera mimics the software. First, it zooms on a woman whose bare face is contorted into a grimace as she shatters an AI’s home display with a rock: Fatima Ishmael, Battery Against A Police Officer. Next, a woman who wears a ski mask that leaves only her dark eyes visible as she charges the garage where the AIs are held: Chantel Wilson, Inciting A Riot. Then, a man with his shirt tied around the bottom half of his face who is tackled and knocked out when his head hits the ground: Jamal Thomas, Resisting Arrest. It is a live commercial. All Rekognize needs are your eyes, a unique tilt to your mouth, the flash of your teeth, your fingers held at the right angle, and it will find you — forty percent of the time. But a girl laid beneath the rising sun with her arms bent like a bird pushed too early from her nest, shattering when she hit the ground, is hardly a price to pay.
Cameras switch back to their respective newscasters. Their saccharine smiles are replaced by solemn tones: “On-the-ground reporting tells us the programming is having difficulty identifying some key subjects. If you recognize the people you are about to see, the head of police urges viewers to call their tip line. These people are dangerous. Viewer discretion is also advised.”
The channels transition. If the cameras started at the beginning, the figures would surround Shahida on the ground before the AI drove them off. They would write WE SAW WHAT YOU DID HERE beneath blood stains after she was packed away, faces tilted towards the cameras mounted on light posts. The beginning is sympathy, so the cameras start downtown. The figures are all concealed head to toe. Their faces are mostly uncovered, but their faces are not truly theirs. All technology can be tricked. They have painted: multiple eyes, all blinking as they move across the screen; gaping mouths on their throats, opening and closing every time they swallow. Their faces are broken into segments that the camera cannot grasp. It cannot focus on what it cannot recognize. The software cannot complete its job.
One channel plays too far. Screens are filled with a girl in a stained khimar whose hands spread across her face, seeming to rip her apart at the seams. She stands on top of an overturned police car, holding the removed arm of an AI. “Shahida was my friend.” Her scream is distorted. The programming is not trained in audio identification, anyway. She points at the building before her, “We saw what you did.” She has multiple mouths, black expanses without teeth. As she speaks, she looks into the cameras, “We are coming in,” and all of them scream.
The channel abruptly stops the video. The newscaster is unprepared to be back on camera so soon. There’s a high flush on his cheeks and the sweetness of his smile is artificial. He tries to laugh it off, wiping one hand across the sweat gathering on his brow. “Geez,” he chuckles, looking into the camera as it quietly focuses on him, “Wasn’t that creepy?”
Vanessa Taylor is a writer currently based in Philadelphia, although the Midwest will always be home. She has work in outlets such as Teen Vogue, PAPER Magazine, and Catapult. Her work focuses on exploring Black Muslim womanhood and the taboo. You can follow her across social media at @bacontribe.