A pedagogical intervention in production

Posted by Mike Lakoju | Date: Nov 24, 2020 | Category: Chatty Blog

Written and based on a theoretical and pedagogical framework by Rhiannon Firth (University of Essex)

With technical input from Mojtaba Ahmadiehkhanesar (University of Nottingham)

Drawing on knowledge co-produced in a workshop with Mojtaba Ahmadiehkhanesar (University of Nottingham) and David Branson III (University of Nottingham)

Project funded by the Engineering and Physical Sciences Research Council (EPSRC) Grant number: EP/R021031/1 Title: Chatty Factories: New Industrial Systems in Manufacturing

Background: AI, Social Robots and Co-bots

Companies and organisations are increasingly minded to consider the social purposes of innovation, and in particular the ethics of introducing new technologies. The proliferation of social and industrial robots is leading to concerns amongst governments and enterprises about the social implications for the future of work, skills and learning.

How humans and machines physically learn, work and interact together is also a hot topic. It has recently been announced that ‘social robots’ will be used to help reduce loneliness in UK care homes. The machines can learn about residents’ interests, hold simple conversations, mimic human gestures and even teach residents languages. While they have been shown to improve mental health, it is also argued that they are an unsatisfactory replacement for human interaction and deep social and kinship ties in a fragmented society and overstretched care system.

Will future robots reduce drudgery, or displace human labour? (Shutterstock)

In manufacturing, it is often assumed that automation technology will replace workers or render them redundant. However, new technologies such as co-bots seek to utilize and maximize the different skills and abilities of both humans and robots. The makers and marketers of co-bots argue they have the capacity to improve working conditions and empower humans to have safer and more fulfilling jobs in manufacturing, for example by automating repetitive, dangerous, dirty and heavy manual labour while humans concentrate on complex and creative activities. It is imagined co-bots will ‘collaborate’ with humans, with the humans able to ‘teach’ the co-bots using physical gesture rather than needing to learn a specialised programming language, and the co-bots might even ‘teach’ humans simple tasks, transferring knowledge taught by an expert without their needing to be present.


The advantages of social robots and co-bots derive from their ability to ‘learn on the job’ using Artificial Intelligence (AI). AI aims to automate, supplement or even supersede human learning; however, it is not without problems. The topic of bias in machine learning frequently hits the news, most recently in the UK A-level results scandal, where students who missed their exams due to the COVID crisis were given predicted grades based on an algorithm shown to discriminate against working-class and BAME students. Algorithmic bias has also been detected in healthcare and policing, whilst the automation of management in workplace and manufacturing environments has been argued to seriously undermine workers’ conditions. The UK government has expressed concerns that the ownership and use of new technologies, processes and robots is likely to make businesses richer and more powerful, enabling the exploitation of workers by monopolistic companies like Uber and Amazon and creating returns that do not flow back to workers.

Some fundamental philosophical questions

Questions of purpose are the foundation of any theory of learning. Education and training – human or machine – is always for something or someone. Questions of technical intervention can’t be divorced from an understanding of social, economic and political structures. Some theories aim to mould learners to fit the demands of the nation or economy. This requires that humans be reduced to nodes in the function of an overarching system: higher capabilities, quirks or differences are shut off or reduced, and training focuses on behaviour or outcome rather than any inner process of transformation. In this sense, mainstream theories of learning are hardly theories of learning at all, because they focus on outputs rather than inner process. The same is true for machine learning: machines may attempt to model or replicate human learning, but usually with the intention of automating ‘human’ tasks already pre-defined in economic terms. The specific technique used matters less than achieving the outcome, and the machines are locked into a system which subordinates the will, creativity and desires of any individual to the profit motive.

Questions of learning also raise fundamental ontological questions about human nature, and about what kinds of relationships between humans and machines are possible and desirable. Some perspectives view machines as simple tools for humans to realise their creative projects and desires. Others believe humans are transformed in their relationship with machines, into hybrid or cyborg creatures. Still others fear that intensified capitalism means humans and machines are simple cogs whose will is subordinated to an overarching sociotechnical system that functions autonomously.

Will future humans become cyborg entities to meet the demands of the economy? (Shutterstock)

Fields such as Human-Computer Interaction have involved researchers engaging with ethical, legal and social concerns on empirical, pragmatic and technical levels. However, considering these questions in the EPSRC’s terms of ‘manufacturing the future’, by visioning factories in 2030, opens the potential for more speculative, futurological and utopian thinking. Product innovation and fixed capital investment therefore involve companies and organisations engaging with these issues.

Some questions to be addressed include:

  • What organizational forms will influence technological innovation in the future?
  • Should change and innovation be imposed from the top-down, or organized from the bottom-up?
  • How will humans and machines relate and integrate in the future? Will they remain separate entities, or form hybrid cyborgs?


Responses from Critical Sociology: A framework

The response to these fundamentally philosophical questions from critical sociologists and other social theorists has been incredibly varied, and I have mapped an extensive range of perspectives elsewhere in a co-authored article. A ‘boiled-down’ version of this sociological framework is summarised below.

To explore the potential to use this framework in Chatty Factories, I adapted it in order to undertake a pedagogical (teaching and learning) intervention with roboticists, with the aim of re-imagining human-machine interactions.

The premise for the framework was that human-robot learning is based upon certain implicit philosophical and paradigmatic assumptions about human nature, the nature of technology, and the social, political and economic structures and relations within which humans and technology are embedded.

In the context of the framework, the variables are defined as follows:

Utopian theories invest emerging technologies with optimism and hope for a more democratic, inclusionary or liberated future.

Dystopian theories invest emerging technologies with fears that the future will be more exploitative, and humans will be increasingly alienated and exploited.

Strategic/tactical theories believe that both positive and negative outcomes are possible, depending on who controls technologies and/or the particular technologies that are chosen and used.

Humanist theories view humans as the ultimate source of creativity and agency, although this creativity and agency can be more or less alienated (removed from the human) where the human is exploited.

Posthuman theories understand humans to be formed in the context of their relationships with nature and technology. Creativity and agency emerge from the interactions between parts, or assemblages, rather than essentially from the human.

Will future humans and robots learn and play together in ways that emphasize their unique skills and differences? (Shutterstock)

A summary of the framework presented in the article is provided below:

There are two different intersecting dynamics at play here.

The first (vertical) axis is about the extent to which technological connectedness associated with Industry 4.0 has the potential to be democratizing and empowering for individuals and communities.

The second (horizontal) axis is about the boundaries of the community – should only humans be included, and if so, can they treat animals, nature and machines as tools in realising their desires? The humanist would say yes, but the extent to which one might imagine they would actually do this in a harmful way depends on the view of human nature – which is essentially contested, even amongst humanists. The posthumanist would say that since interaction with tools changes the human too, it is both empirically (in practice) and normatively (ethico-politically) wrong to draw the boundaries of political community at the human.

A critical intervention in robotics 

The framework represents the range of possible worldviews and perspectives concerning human-machine interaction. Adopting one of the perspectives is like putting on a new pair of glasses that helps you see the world in a different way – it can be fundamentally challenging and paradigm-changing, like a Gestalt-shift. Small-scale technical interventions, of the type initiated within this project, are unusual in the field of critical sociology and social theory, which generally seeks to understand and analyse larger-scale social systems and structures.

The methodology for the intervention drew on some of my previous work. I used techniques loosely inspired by critical pedagogy, a form of teaching and learning drawing on the work of Paulo Freire and others that acknowledges that learning is never politically neutral. The purpose of education is not to transfer knowledge as immutable truths but rather to facilitate critical thinking about the ways in which power relationships are embedded in education, society and culture. Mapping perspectives, and engaging with alternative perspectives, can facilitate people to identify the benefits and limitations of their own perspectives, and to engage critically with other perspectives in order to develop or transform their own. It can facilitate thinking about what would happen if you change perspective, and can help communication and dialogue between perspectives.

On a more technical level, the intervention sought to look at ways in which humans could interact with co-bots in new ways which did not involve the robot simply mimicking or automating the actions of the human, but rather involved a mutual process of learning between human and robot.

I developed a range of images based on the framework to help people who know much more about robotics and machine learning than I do to situate their own perspective within these politico-philosophical debates, in the hope of revealing some of the assumptions and blind-spots in that perspective, and possibly prompting a move to another.

As a reminder, the focus of the exercise was on thinking through different perspectives on the forms that the relationships between humans, machines/robots and organizations might take in factories of the future. The images were accompanied by brief textual summaries and formed a basis for discussion and for participants to identify their own perspective or position.

I presented the six different perspectives on human-machine learning described above, explaining to the roboticists that:

  • Every person, team or organization fits into (at least) one of the categories
  • The typology enables you to situate your own perspective and consider its strengths and weaknesses
  • It can facilitate thinking about what would happen if you change perspective
  • It can also help communication between different perspectives

A simplified version of the typology, focused on suggested ideas for practical implementation, was also presented.

The intervention was undertaken using the following steps:

  1. Participants discuss the current methods being used for the production of the product and where they would place these on the typology.
  2. Participants discuss how these methods relate to the ‘chatty factory’ floor.
  3. Participants consider assumptions and blind-spots of existing approach, and where they would like to be in terms of changing the method.
  4. Choose at least one change to the method.
  5. Implement, assess and evaluate – are any more changes needed/desired?

Outcome of the intervention

Surprisingly, there was little disagreement as to which perspective the robotics experts at Nottingham adopted: they identified their own perspective as ‘humanist-utopian’, and we agreed that their underlying assumptions did indeed seem to fit this model. The characteristics of the tech-utopian, humanist model with which the researchers self-identified included: the view that technology/robots are tools that stay under the control of human users; that tools increase users’ power and abilities (in the field of production) and are thus desirable; that robots are useful because they can do things that humans cannot physically or ethically do; that robots will lead to a more productive economy that benefits everyone; and that AI may prove better at solving universal human problems. We discussed where these assumptions might have come from, and it was agreed that they are currently hegemonic (mainstream) in manufacturing.

In practical terms, the kinds of human-robot relations that this kind of perspective suggests include: turn-taking; robots being used to replicate human expert knowledge; and combined measures of human and robot outcomes (e.g. optimisations using AI) but with assumed human outcomes being the standard measure of value (e.g. increased productivity).

After presenting the alternative perspectives and facilitating a critical discussion on their differences, assumptions, blind-spots and possibilities, the robotics experts at Nottingham decided that they wished to move towards a tech-utopian, posthuman model. The researchers were enthusiastic about questioning dominant assumptions about human nature, and about the limitations and possible alternative relationships between humans and robots. However, they were less eager to question dominant assumptions about existing social and economic relations, seeking to continue to focus on values such as efficiency in terms of fast and low-cost production. This is unsurprising for reasons outlined in more detail below, which we believe relate more to the limitations of the intervention and institutional constraints than to the willingness of the robotics experts to consider the alternatives.

The tech-utopian posthuman model that the participants chose to move towards asserts that technologies can subvert binaries, e.g. by empowering marginalised or oppressed people, without needing to fundamentally question the dominant socio-economic system.

Practical ideas for implementing posthuman ideas discussed during the intervention included: combining human and robot data through enhancements (like prosthetics) or data assemblages; different humans using and teaching robots in different ways, so that both the human and the robot adapt to working together; ways of overcoming binaries and/or hierarchies between humans and robots; acknowledging humans’ animal nature, e.g. through the use of tactile technologies; and thinking through different ways of seeing, for example through augmentation or virtual reality. In conclusion, the participants chose to experiment with a light wand as a different form of tool for humans to teach robots; this choice is expanded on below.

An overall deep-fuzzy approach to capturing the characteristics of a dexterous human expert


The rationale for the practical changes from the perspective of the robotics experts is as follows (with thanks to Mojtaba Ahmadiehkhanesar for this contribution):

The human expert has already learnt to perform their tasks with high dexterity and efficiency over time, through experience and trial and error. However, not all human experts know how to program robots and transfer their knowledge to them. The magic wand is a modified version of a commercially available, user-familiar tool which resembles a real industrial device, or just its handling part. The main reason for this choice is that it can be used with minimal training to capture the movements of a dexterous human expert’s arm. The movements are captured gyroscopically, as the 3D orientation of the tool with respect to the component; the 3D position of the human arm is of high interest as well. To position the tip of the tool with high accuracy, the point cloud captured from it by a stereo camera is analysed using a deep-fuzzy intelligent approach. Since the human expert already knows how to handle the magic wand, this may considerably reduce the total knowledge-transfer time to a robot once implemented on the factory floor. It is further possible to 3D-print new cases for the magic wand with the same specification as the real-world tool in terms of shape, weight, etc., to make sure that the human expert can perform the task with the same characteristics as the real task, avoiding any loss of valuable experience during the knowledge transfer. The light wand experiment reflects the philosophical intervention and the movement towards the utopian-posthuman perspective because it is a more tactile and embodied technology which allows the human and robot to work together in a way which ‘black-boxes’ the programming within the tool. The technical outcome of the intervention is illustrated in the videos below, which show the different ways in which the robot can be programmed.

First, using code or a teach pendant:

https://drive.google.com/file/d/1jBO5G9U4WAvJvnEnylH_xZHR8cl1FLcv/view
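To make the contrast concrete, conventional code-based programming requires a programmer to type in every pose and motion parameter explicitly. The sketch below is purely illustrative: the Robot and Waypoint classes are invented for this example and are not the interface used in the video.

```python
# Purely illustrative sketch of conventional, code-based co-bot programming.
# The Robot/Waypoint classes are invented for this example; real controllers
# use vendor-specific languages or a hand-held teach pendant.
from dataclasses import dataclass

@dataclass
class Waypoint:
    x: float          # tool position in metres
    y: float
    z: float
    rx: float = 0.0   # tool orientation in radians
    ry: float = 0.0
    rz: float = 0.0

class Robot:
    """Stand-in controller that logs commands instead of moving an arm."""
    def move_linear(self, wp: Waypoint, speed: float, blend_radius: float):
        print(f"move to ({wp.x:.2f}, {wp.y:.2f}, {wp.z:.2f}) "
              f"at {speed} m/s, blend {blend_radius} m")

def polish_edge(robot: Robot):
    """Every pose, speed and blend radius must be typed in by a programmer."""
    path = [Waypoint(0.40, -0.10, 0.25),
            Waypoint(0.40, 0.10, 0.25),
            Waypoint(0.45, 0.10, 0.22)]
    for wp in path:
        robot.move_linear(wp, speed=0.1, blend_radius=0.005)

if __name__ == "__main__":
    polish_edge(Robot())
```

It is this explicitness – numeric poses, speeds and blend radii – that makes the method inaccessible to expert workers without programming training.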

Then, after the intervention, using a light wand, which is a more tactile and embodied way to programme the co-bot, and potentially more accessible for people without a background in robotics, or without special training or skills:

https://drive.google.com/file/d/1s-Gh7U8at9qOqgG8ZWKXRei7BiFrtXMQ/view
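The project’s implementation is not published, but the wand pipeline described above can be sketched in outline. In the hypothetical Python fragment below, everything is an assumption for illustration: made-up fuzzy sets over hand speed blend fine and coarse smoothing of the noisy wand-tip trajectory, and a simple moving average stands in for the learned (‘deep’) denoiser that would operate on the stereo point cloud.

```python
# Hypothetical sketch (not the project's actual code): turning noisy "magic
# wand" tip captures into robot waypoints with a simple fuzzy blending rule.
import numpy as np

def gaussian_membership(x, centre, width):
    """Degree (0..1) to which a speed value belongs to a fuzzy set."""
    return np.exp(-((x - centre) ** 2) / (2 * width ** 2))

def smooth(xyz, window):
    """Stand-in for the learned ('deep') denoiser: a simple moving average."""
    kernel = np.ones(window) / window
    return np.column_stack([np.convolve(xyz[:, i], kernel, mode="same")
                            for i in range(3)])

def wand_to_waypoints(tip_positions, dt=0.02):
    """tip_positions: (N, 3) noisy tool-tip estimates, e.g. from stereo vision.
    Returns (N, 3) smoothed waypoints for the robot to follow."""
    speeds = np.linalg.norm(np.diff(tip_positions, axis=0), axis=1) / dt
    speeds = np.concatenate([[speeds[0]], speeds])  # pad back to length N

    # Fuzzy sets over hand speed (centres/widths are invented tuning values):
    # 'precise' = slow, deliberate work; 'transit' = fast repositioning.
    mu_precise = gaussian_membership(speeds, centre=0.05, width=0.05)
    mu_transit = gaussian_membership(speeds, centre=0.50, width=0.25)

    light = smooth(tip_positions, window=3)    # keeps fine detail
    heavy = smooth(tip_positions, window=15)   # suppresses hand tremor
    w = mu_precise / (mu_precise + mu_transit + 1e-9)  # defuzzification
    return w[:, None] * light + (1 - w[:, None]) * heavy

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    t = np.linspace(0, 2 * np.pi, 400)
    true_path = np.column_stack([np.cos(t), np.sin(t), 0.1 * t])
    noisy = true_path + rng.normal(scale=0.01, size=true_path.shape)
    waypoints = wand_to_waypoints(noisy)
    print("mean deviation:", np.linalg.norm(waypoints - true_path, axis=1).mean())
```

The point the sketch illustrates is the fuzzy blending step: slow, deliberate movements are treated as precise work and smoothed lightly, while fast transit movements are smoothed heavily, so the expert’s skill is captured without the hand tremor – and without the expert writing a line of code.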

Challenges

We faced some challenges in undertaking this technical intervention based on critical social theories:

  1. Limitations conditioned by the disciplines we are based within and by our own previous academic trajectories. For example, the Essex team originate in sociology and situate ourselves one or two steps further down the vertical axis of the typology, questioning socio-political relationships and structures, whereas technical and STEM disciplines usually aim at problem-solving within existing structures.
  2. In a sense, the perspectives are holistic. They are about the entirety of social relations. Only some of the perspectives allow for small-scale tactical interventions of the kind that’s possible with a small lab team (rather than, for example, a larger organization, or national policy, or social movement).
  3. There is a level of incommensurability between perspectives on learning. For example, a member of the Nottingham team works with machine learning, while the Essex team works with theories of human learning. The theories Essex work with hold that machines can’t and shouldn’t replicate human learning – yet most paradigmatic machine-learning techniques and automation technologies attempt to do exactly that, with a long historical basis, or else operate on a pragmatic, problem-solving basis.
  4. It is difficult to devise a form of learning which prioritises the human in the context of a factory and the profit motive – a form of production which subordinates both humans and robots to the broader technology of the profit motive, and to the centrality/sovereignty of the commodity/product.

Analysis of findings

Nevertheless, from the perspective of critical sociology the findings are very interesting.

It is especially interesting that the roboticists were more open to questioning ideas of humanity than social and economic systems – the opposite of how things looked under Fordism, when, for example, the Soviet Union appeared to present a real and viable alternative economic model for production. In fact, this paradigm shift is also evident within the social sciences: it has become a lot easier to reject ‘the subject’ or ‘humanism’ or ‘modernity’ in very broad terms than to raise even fairly moderate questions about reforming or replacing capitalism or the state – a position which has historically been seen as a respectable intellectual one, but has recently been branded too ‘extreme’ to be taught in schools.

Currently fashionable texts in the critical social sciences, like Donna Haraway’s Cyborg Manifesto, focus on what would need to be done after capitalism/civilisation collapsed, ignoring this huge elephant in the room, while the broader trend of ‘identity politics’ focuses on changing ‘the subject’ through self-change and ignores larger structures like capitalism and the state. The blind-spots of tech-utopianism are therefore not limited to technical or STEM subjects, but appear to be part of the present Zeitgeist, even within ostensibly critical social sciences. This expresses a taken-for-granted power of ‘the system’ over ‘individuals’, so to speak – actually a top-down power, but one which now seems invisible even to those at the top. Ignoring the existence of other perspectives, or disavowing their possibility, means that the particular system in which we are situated (i.e. the historically specific formation of state and capital) is experienced as a necessity which can no longer be questioned or challenged – though such questioning has historically been the core purpose of critical sociology.

Towards more co-operative and democratic workplaces

The UK government-commissioned report Automation and the Future of Work has called for “more co-operative ownership models”, arguing that “greater employee engagement, stronger employment legislation and a fairer corporate tax regime are key to ensure public support for the benefits of a growth in automation, a rise in living standards and a fair economy and society.”

Such initiatives do not need to be government-led, and there are already grassroots, social-movement and social-enterprise initiatives that might serve as models to prefigure fairer and more democratic human-technology relationships in the workplace and beyond. Examples include workers’ co-operatives that are owned and self-managed democratically by workers. Co-operatives that use automation technology are rare, but they do exist: there are very successful examples in the USA and in Spain. Co-operatives have also used algorithmic technology; for example, London cabbies have developed Taxiapp, London’s co-operative alternative to Uber. This follows similar booking initiatives all over the USA, in a revival of localism that seeks to develop sustainable, democratic local economies. Other kinds of alternative, user-led organizations that mobilize popular open-source technologies like Arduino in creative and artisanal ways include Hackspaces and Makerspaces, which exist all over the world, with around 70 in the UK. While these initiatives tend to be set up from the grassroots and run by workers or members rather than government-led, governments can create sympathetic legal and tax frameworks for co-ops, as well as offer preferential access to public contracts, job-creation loans and access to financial expertise. Some experts argue that in an increasingly automated future, co-ops will play a much greater role because they reconfigure the relationship between work and learning, making work more rewarding financially and intellectually whilst supporting lifelong learning and sustainable local skills bases. They are also better at maximising human skills that can’t be automated, including empathy, creativity and human interaction.

Robots can work together with humans in ways that don’t simply replicate and replace human knowledge. As part of Chatty Factories, the Essex team have also been undertaking ethnographic work with a range of organizations in order to study forms of expert knowledge that can’t be systematized or automated – what we are calling ‘tacit knowledge’. Examples include the use of creativity and analogy, which machines find hard, as well as instinctual and embodied social knowledge such as care, empathy and emotional response. Further publications in this area are in draft.

Recommendations for industry

Based on theoretical and empirical research, we can recommend some tactical interventions that companies and organizations can use in order to maximize the tacit knowledge of workers. These include:

  • Considering the whole production process in order to make machines that are usable as tools or supplements, rather than as replacements for humans – this means identifying forms of tacit and expert knowledge that you can’t systematize or automate, and ensuring that they are valued as uniquely human work.
  • Maintaining the importance of the machine-human distinction: humans should retain the directive role. In creating and investing in technology and fixed capital, companies should consider investing in technology that is open to multiple uses, making sure the machine doesn’t predetermine how the human uses it.
  • Respecting the difference is important; machines may “think” in purely instrumental ways but humans don’t, so we shouldn’t try to model machines on humans or make humans work like machines, but rather, we should make human-machine assemblages which play to the strengths of the two different modes of being.
  • Considering and accounting for two different ways machines can interact with human creativity. First, there is the machine-as-tool, effectively an extension of the human will – for example, CAD lets someone dream up and make an object. Second, there is a more artisanal process in which understanding and mastering the tool is part of the desire. The former is aided by black-boxing (ignoring the make-up/programming behind the tool), while the latter is hindered by it.

In future research, we hope to offer further suggestions for ways for humans and robots to work together that are creative, empowering, politically transformative or socially ethical/equalising.
