
The unintended consequences of tech solutions

Kathryn Dingle

This week I have been doing a lot of thinking about the unintended consequences of UX design and technology. This is something that has come up quite a bit lately. I believe we should think about it when designing anything, but I am particularly geeking out about it around technology.

Design and technology can be really helpful for solving a problem facing users or companies, but often we don’t stop to think about the knock-on effects of what this could mean in the future, or for others in the system. We should look to pre-empt the impact of our changes. What could happen with this technology? How could it be exploited? What other problems could your solution cause?

However, to make things even more complicated, it isn’t until you put a product or service in the hands of a user that you will truly understand its uses. As a user you might not see the connections between the unintended consequences and the design changes. You may see them day-to-day in the things that frustrate you when interacting with products and services; or you may even enjoy the features, despite them being detrimental to things such as your productivity.

Some of these unintended consequences can be mitigated by having a diverse range of users and stakeholders involved throughout the design process. We should be looking at who is doing the designing as well as the product itself. We need to consider any bias or discrimination from individuals or data sources involved in the design process.

Here are a few examples of design changes which had huge unintended consequences for the world:

Aza Raskin, the creator of ‘infinite scroll’ online

Earlier this year he publicly apologised for creating this functionality. It is now seen as a huge design mistake: users are no longer prompted to stop viewing things online, and so they carry on scrolling. This has had a great impact on the use of applications such as Instagram and Facebook, but Aza had no idea how much impact this functionality would have on how we interact with information online, or how many hours of users’ lives it would take up.

Aza has said: ‘It is no longer enough as a designer to think about the constraints of just one individual using my product. Instead we have to think the fractal version up, to technology–society interaction.’

Find out more about the design of Instagram and the lessons from unlimited scrolling in the ‘Abstract: The Art of Design’ Netflix series – episode ‘Ian Spalter: Digital Product Design’.

Ethan Zuckerman, the creator of internet ‘pop-ups’

Ethan created pop-ups to distance advertising from the user-generated content on users’ Tripod.com websites, in order to protect brand recognition through a degree of separation, and so that the ad didn’t interfere with users’ website content. He had no idea how these would be exploited, or the disruption and annoyance they would cause so many users, as organisations adapted pop-ups to make them near impossible to get rid of, or used them to share explicit content.

Ethan has also publicly apologised, in a really interesting essay for The Atlantic. In his essay he summarised the process as:

“At the end of the day, the business model that got us funded was advertising. The model that got us acquired was analyzing users’ personal homepages so we could better target ads to them. Along the way, we ended up creating one of the most hated tools in the advertiser’s toolkit: the pop-up ad.”

Find out more about this mistake on the Reply All podcast.

What are we doing about this in MBL?

When thinking about what other problems our solution could cause, we are currently thinking very high-level about the areas we are exploring. We are currently exploring how technology might be able to help match young people with services that they don’t currently know exist within the charity sector. But can the charity sector meet this demand if we solve it, with ever-decreasing funding sources? What can we do to mitigate this?

This is why we are also exploring how technology can increase the capacity of the sector. One idea was a ‘virtual youth worker.’ Youth workers are so important in many young people’s lives and we would never want to remove them. We are, however, interested in whether there are elements of their work which could be automated through technology, so that we could free up their time for the work that needs to be done face-to-face.

Going forward, we will be exploring the unintended consequences of our work throughout the design process. Watch this space to see what we find and how we mitigate these challenges.

Make sure you are looking at:

  • how your technology could be used and abused, and what new problems you could be creating whilst making a digital solution.
  • *who* is doing the designing as well as what is being designed and how (Thank you for the comment Katie!)

UPDATE: How can you do this in practice?

Doteveryone have done lots of thinking in this area and have created a manual for how teams can explore the intended and unintended consequences of technology as part of the design process. Here is a manual for running an event on this with your team.

This is an evolving area of thinking for me and I would love to hear about any unintended consequences you have seen from tech, or any learning you have about how to think this through!

Comments (4)

  1. Great blog! The unintended consequences of AI and algorithmic decision-making are also worth looking at. For example, algorithms used for hiring people or determining credit scores can have bias and discrimination built into them, which is difficult to interrogate as there is little transparency about how decisions are reached. This highlights wider issues about cognitive bias in tech design processes: https://www.wired.com/story/the-real-reason-tech-struggles-with-algorithmic-bias/ One key take-away is that it’s important to look at *who* is doing the designing as well as what is being designed and how.

    1. Oh yes, this is a really interesting area, especially in criminal justice at the moment, because the MOJ have recently released a new algorithmic tool for assigning prisoners to suitable prisons and there are concerns about bias!

      Absolutely I will explore this more and I completely agree with your key takeaway – I will add this in!

    1.
      Hi Hannah,

      It is great to hear you have done lots of thinking in this space. The guide looks fantastic. I will share this with the team and look at how we can incorporate this into our design sprints in February next year (watch this space as I will post about my learning!)

      I will add your guide to the post so nobody misses it.
