What is wrong with the world?

“What is wrong with the world?” – that is a question I’ve been coming back to in the last few days. Whenever I read about some problem that stirs a lot of discussion, arguments and – all too often – anger, I wonder: Don’t we have more important problems to solve?

The more time I spend thinking about "figuring out what problems are important", the harder it seems to actually answer that question. There are so many problems, so many contexts, so many scopes to consider – how would you ever be able to rank them all?

One idea to approach this: make a well-structured list that forces me to gather certain data, which can then help answer interesting questions. So far I've come up with the following fields (a rough sketch of a possible data model follows the list):

  • Abstract – a free-form description of the problem, preferably very succinct for quick digestion
  • Scope – where does this problem apply geographically? A region, a country, a continent, the world?
  • Context – in which context does this problem apply? Something like health, gaming, consumerism
  • References – where can I read more about this?
  • Rating – how important is this problem, within the given Scope and Context?
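
To make this concrete, here's a minimal sketch of what such an entry could look like as a data structure. All names and types are my own assumptions for illustration, not a settled design:

    // Hypothetical data model for one problem entry; field names follow
    // the list above, everything else is an assumption.

    type Scope = "region" | "country" | "continent" | "world";

    interface ProblemEntry {
      abstract: string;      // free-form, succinct description
      scope: Scope;          // where the problem applies geographically
      scopeDetail?: string;  // e.g. "Western Europe" when scope is "region"
      context: string;       // e.g. "health", "gaming", "consumerism"
      references: string[];  // URLs or citations for further reading
      rating?: number;       // see below – the hardest field to get right
    }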

The Rating is probably the most problematic field, even when restricted to a Scope and Context. Maybe a way to approach that is to order problems just relative to other problems within their category: "X is more important than Y, less important than Z."
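
One common way to turn such pairwise judgments into a numeric ranking is an Elo-style rating, as used in chess. A minimal sketch, with my own function name and the conventional constants:

    // Elo-style update: each "X is more important than Y" vote nudges
    // the two scores, without asking anyone for an absolute rating.
    // k controls how fast scores move; 32 and 400 are conventional values.

    function eloUpdate(winner: number, loser: number, k = 32): [number, number] {
      const expectedWin = 1 / (1 + 10 ** ((loser - winner) / 400));
      const delta = k * (1 - expectedWin);
      return [winner + delta, loser - delta];
    }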

Handling scope and context properly requires a categorisation system that helps avoid duplicates and ambiguity. The ranking system would also need some way to gather input from multiple sources and then present an aggregate, such as a mean or median, to the viewer.
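
For the aggregation step, the median is often preferable to the mean, since a few extreme votes can't drag it around. A sketch:

    // Median of collected ratings; more robust to outliers than the mean,
    // which matters when there are only a few, noisy votes per problem.

    function median(ratings: number[]): number {
      const sorted = [...ratings].sort((a, b) => a - b);
      const mid = Math.floor(sorted.length / 2);
      return sorted.length % 2 ? sorted[mid] : (sorted[mid - 1] + sorted[mid]) / 2;
    }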

One interesting data source for the rating might be insurance data: big reinsurance companies in particular know quite well which events in the world cause the most damage, since they insure the insurers that insure more or less everything. Of course, the costs they insure against are very specific and will miss a lot of problems that don't have a direct cost attached, say, global warming.

Maybe one day I’ll put together a prototype for this. If you have any interest in that and would like to chat, let me know.

Update 1:

Chris Bannon suggests using a logarithmic scale, like the Richter magnitude scale: "You would measure the energy released (aka impact)." I like the idea, since a linear scale would not be effective at capturing such a wide range.

When reading about the Richter scale, I learned that while it is still referenced a lot, the scale actually used for measuring earthquakes today is the moment magnitude scale (MMS), which has been adopted in most countries since the late 20th century. To make adoption easy, its formula includes constants chosen so that its values roughly match the Richter scale – the underlying formula was replaced, but not the output range.

That separation seems like a useful concept as well: ranking problems from 0 to 10 on a logarithmic scale is one thing; how to actually calculate the ranking is another, independent of the output. That way the calculation can be adjusted again and again, without affecting the output scale.
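
As a sketch of that separation, assuming some raw "impact" number (cost in dollars, people affected, whatever the formula of the day uses), the mapping onto a fixed 0-to-10 logarithmic scale could look like this. The reference values are placeholders, not calibrated data:

    // Maps a raw impact value onto a fixed 0–10 logarithmic scale.
    // MIN_IMPACT and MAX_IMPACT are hypothetical calibration points;
    // swapping the formula or constants never changes the output range.

    const MIN_IMPACT = 1;     // impact that maps to magnitude 0
    const MAX_IMPACT = 1e10;  // impact that maps to magnitude 10

    function magnitude(impact: number): number {
      const m = 10 * (Math.log10(impact / MIN_IMPACT) /
                      Math.log10(MAX_IMPACT / MIN_IMPACT));
      return Math.min(10, Math.max(0, m));
    }

With this particular calibration, a tenfold increase in impact adds one point of magnitude, just like on the Richter scale.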

Update 2:

My friend Enes, via email, brings up two interesting points, which I’d like to reproduce here, with some of my interpretation:

  1. Who's the audience for such a ranking system? An "average Joe" would judge issues quite differently than, say, someone actively involved in politics, as one of many potential backgrounds.
  2. Every culture will rate issues differently. This might depend on geographical location, on language, on status – whatever contributes to a culture. In order to make ratings comparable, each rating likely needs to record the culture it comes from.

Including audience as an additional field could tell us who is supposed to be interested in a given problem. Culture might be more useful as a property of each individual rating, though figuring out the culture of a voter is probably a pretty hard problem in itself.

Update 3, in April 2014:

After reading parts of Thinking, Fast and Slow, I've been coming back to this idea again. Our inability to deal with statistics intuitively seems to cause a lot of misunderstanding and violated expectations. For example, we are pretty bad at rating the risk of different forms of travel: we're much more likely to get hurt or killed in a car than in a plane, yet many people fear flying more. Because statistics are so unintuitive, I'd extend the ideas outlined above by integrating relevant statistics into each entry. While it's easy to mislead readers with statistics, having verifiable statistics available would still be much better than having none at all.
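
As a tiny example of making such statistics comparable at all, raw fatality counts only become meaningful once normalised to exposure. A sketch, where the actual inputs would have to come from cited sources:

    // Normalises raw fatality counts to a comparable rate
    // (deaths per billion passenger-kilometres), so that e.g. car
    // and plane travel can be compared on the same axis.

    function fatalityRate(deaths: number, passengerKm: number): number {
      return (deaths / passengerKm) * 1e9;
    }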

-Jörn