Note: This form is for collecting cases.
But we're also looking for people and organisations to provide expert opinions on the cases we've collected - i.e. to help us work out what systemic risks actually are!
If you're interested, please contact marsh@algorithmwatch.org
AlgorithmWatch is using crowdsourced observations from online platforms and search engines to address the question: what should benchmarks for "systemic risks" be?
Context
Under the EU’s Digital Services Act (DSA), very large online platforms and search engines are required to assess whether their design and functioning present “systemic risks”, such as increasing threats to human rights, to elections, or to safety and wellbeing.
We are building a knowledge base of cases which provide external evidence of whether platforms *have* effectively assessed and mitigated systemic risks, as required by the DSA.
We are therefore asking for submissions of real cases which may show evidence of systemic risks online. If you are aware of such a case, please submit information below.
You could submit, for example, cases which show evidence of:
- Harmful content being repeatedly ‘pushed’ to users by algorithmic recommender systems.
- Systemic gaps, loopholes, or failures in moderation/enforcement systems which may increase risks to users.
- Content which threatens elections, human rights, or health, spreading virally in ways that may be supported by platform features.
We are also interested in cases which may show evidence of risks being successfully mitigated.
You don't need to have done the research or observations yourself - articles, reports, etc. of other people's observations are welcome, when relevant.
Cases you submit must have occurred since August 2023, and relate to specific risks within the EU. Further specifications are listed below and more information is available on request: marsh@algorithmwatch.org.
You may stay anonymous, or leave details allowing us to credit your contributions.