Note: Human Computation and Cognitive Intelligence Tasking
The problem of managing the vast amount of data generated on the web, particularly by emerging e-commerce services, was a practical issue for firms in these businesses. One solution was developed by Amazon: the Amazon Mechanical Turk (AMT), which established and demonstrated a number of novel concepts. Low-level data work needed on e-commerce sites, such as spell checking, link checking, and image tagging, that could not be done by computers could be parcelled out to an unlimited number of individuals with internet access, who could execute these tasks with minimal supervision and be paid per item of ‘micro’ work, or ‘Human Intelligence Task’ (HIT), rather than through a more conventional corporate outsourcing service for large or ongoing data tasks. However, instead of keeping this as an in-house service, the interface, and the ‘crowd’ that had been recruited to work on it, were opened up to any user who wanted to set a task. This not only became a commercial service, but also the basis of an exploration of how distributed microwork systems that send tasks to unknown workers could be used to deliver a reliable service for business, science, or any other type of user. It also spawned a new sector and a new form of work, often known generically as ‘Pay to Click’.
However, it is actually far from straightforward to turn a real task into microtasks to be distributed to unknown workers and get back reliable results. Luis von Ahn, who developed the CAPTCHA system as a way of telling humans and computers apart on web page forms, realised that he could turn the problem around and use this popular website security system to get the web-using community to do useful work for free – transcribing bits of text unreadable by computers every time they signed up for a web service, or as part of a game. This approach, dubbed ‘human computation’, laid the basis for a programme of research on how to integrate a ‘crowd’ of human workers into the logic of computation, where they could be addressed and directed on demand, as if they were part of that computer, to do tasks that might one day be done by computers (Quinn and Bederson 2011). The tasks allocated to people are sometimes referred to as ‘Cognitive Intelligence Tasks’.
Now computational approaches could be developed to turn complex tasks into simple ones, to allow computers to check the work of people, and people to check the work of computers, to ensure quality. Common techniques in human computation include giving the same task to two people and comparing the results, or first trying different computer algorithms and then, if they do not agree, escalating the task to a human being. Human computation systems also allow the operationalization of ideas of distributed collective intelligence – aggregating or averaging the input of many people to find the optimum solution to a problem.
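The redundancy-and-agreement technique described above can be sketched in a few lines. This is a minimal illustration, not any platform's actual implementation; the function name, the answer labels, and the agreement threshold are all assumptions made for the example.

```python
# Minimal sketch of redundant task assignment with majority voting,
# a common quality-control technique in human computation systems.
# Labels and the min_agreement threshold are illustrative assumptions.
from collections import Counter

def aggregate_answers(answers, min_agreement=2):
    """Return the majority answer if enough workers agree,
    else None, signalling that the task should be escalated
    (e.g. sent to an additional worker or a trusted reviewer)."""
    counts = Counter(answers)
    answer, votes = counts.most_common(1)[0]
    if votes >= min_agreement:
        return answer
    return None  # no consensus: escalate

# Example: three workers tag the same image.
print(aggregate_answers(["cat", "cat", "dog"]))   # consensus: cat
print(aggregate_answers(["cat", "dog", "bird"]))  # no consensus: None
```

The same pattern extends naturally to the algorithm-first variant: run two algorithms, and only fall back to human workers when their outputs disagree.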
Just as tasks have to be adapted to computational resources, tasks in human computation systems have to be matched to people with the appropriate minimum skills. Most human computation projects require the testing and training of individual workers before they can work on a project.
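A common way to operationalize such testing is to score candidate workers against a small set of ‘gold’ questions with known answers, admitting only those above an accuracy threshold. The sketch below assumes this gold-question approach; the function name and the 0.8 cut-off are illustrative, not any platform's actual policy.

```python
# Minimal sketch of a worker qualification test using 'gold' questions
# with known answers. The 0.8 accuracy threshold is an assumption.
def passes_qualification(worker_answers, gold_answers, threshold=0.8):
    """Return True if the worker's accuracy on the gold questions
    meets the threshold required to join the project."""
    correct = sum(1 for w, g in zip(worker_answers, gold_answers) if w == g)
    return correct / len(gold_answers) >= threshold

# Example: a worker gets 4 of 5 gold questions right (accuracy 0.8).
print(passes_qualification(["a", "b", "c", "d", "a"],
                           ["a", "b", "c", "d", "b"]))  # True
```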
Today a whole range of different approaches is used in practice. AMT pays workers in cash; CrowdFlower also embeds tasks in games and pays people in virtual game credits; Amazon and other e-commerce sites get people to provide ratings and reviews for free; and every time we search on Google, we are providing work that improves the system.