A couple of weeks ago, on and around August 10th, yet another long-running argument was being conducted on Twitter between anti-vaccine supporters on the one hand and autistic advocates on the other.
You’re probably thinking, “So what?” These arguments go on almost constantly, so what does it matter? Well, events transpired differently this time when one of the participants tweeted a strobing GIF to somebody who had earlier identified themselves as having photosensitive epilepsy.
I was participating in that heated discussion. The flashing GIF was tweeted to a very dear friend of mine and came within a whisker of triggering a seizure. The person who sent it followed up by asking if the recipient was dead: it was absolutely clear that they intended to cause harm.
It’s fairly widely known that such GIFs can trigger seizures in susceptible individuals and that such seizures can be life-threatening. That is why a court in Texas accepted earlier this year that, when used in this way, a strobing GIF is classified as a dangerous weapon.
The earlier incident that led to this court ruling was widely reported in the media around the world, such as in this report from a UK daily national newspaper. So there is no question that the potential of these animated images to cause harm is public knowledge and has been for some time.
It is also the case that broadcasters in the UK (and I assume in other countries around the world) are prohibited from broadcasting dangerous flashing video images, and required to issue warnings where this is unavoidable, such as in news broadcasts involving flash photography. [Ofcom Guidance Notes (PDF)]
I believe it’s time for Twitter to be held similarly responsible for preventing such assaults against their users, or at the very least for taking reasonable steps to protect them from harm.
As was noted in the Texas District Court filing, these animated GIFs play “automatically when the tweet [is] viewed”. The combination of being able to embed such flashing animations in a tweet along with the knowledge that they will be shown without the need for the recipient to take any action makes them exceedingly dangerous to susceptible people.
Even without malicious intent these images pose a potentially deadly risk. But we know they are being used deliberately, with the intention of harming the recipient. Why is Twitter not taking action to protect vulnerable users of its platform? It’s only luck that deaths have been avoided so far: will it take a death before Twitter does the right thing?
Please, Twitter. You have the technical expertise to make this happen, to protect your users with photosensitive epilepsy from this very real risk of harm or death. The ball is in your court. I hope you will do the right thing.
Alexandra Forshaw (@myautisticdance)