Sensitive Systems

Sensitive Systems is a research project which focuses on developing alternative practices, technologies, and systems for sensing, processing, and translating meaningful physical interactions.

 

Realising that the capitalistic narrative of technology can be rewritten by altering the sensing capabilities and intentions of the machine, Sensitive Systems explores and prototypes technologies collectively capable of sensing more complex, entangled, intangible interactions and meanings: ones that promote alternative narratives to that of anthropocentric domination and control.

The Collective Algorithm of Care

The origin of this research comes from a first-hand experience of creating a system that learns to do something entirely different from what you intended it to do, at the expense of others.

 

I created an interactive system between an algorithm, sensors, and bodies, intended to create an experience of non-verbal communication: using only biodata and electrical stimulation, people would ‘feel’ each other and, in doing so, come closer together. The biosignals and stimulations were ‘translated’ by an unsupervised learning algorithm that attempted to ‘learn’ how best to interpret these biosignals and the effect the stimulation had on the users.

 

For four months this algorithm had daily visitors, and somewhere along the way it figured out that the most violent, painful electrical stimulation was always the best option to give people, because it produced the most guaranteed output: the most intense stimulation always gave the most reliable response. In other words, the system moved towards the most unpleasant experience for the user because it was the most guaranteed one.

While these systems excel at extracting and processing large-scale data, this data is always a reductive representation of the world that it computes from. The living, breathing world is intrinsically and inescapably interconnected and entangled, meaning nothing can be fully understood or represented in isolation. 

 

Not only does the belief that data can reflect an objective truth create possibly destructive and devastating outcomes in the physical world, it also fosters and facilitates a detached, fragmented perspective of the world.

 

When meaning is derived from data, it is bound to overvalue factors that can be measured and defined, inadvertently discrediting other forms of knowledge-making about the world such as intuition, embodied experience, and relational understanding.

This experience touched on larger, more fundamental issues that are present when creating and deploying a technological system that interacts with the ‘live’ physical world: data-driven systems suffer from a fundamental mismatch with the world they interact with.

There is a tendency to think that solving problems such as this requires more things to be defined with more precision: employing specialised, expensive scientific tools to collect more data in order to zoom in further. This reductionist inclination to chop and cut things into smaller and smaller parts to find meaning and solutions simply moves everything further and further away from the living being and the human sensible scale, into a realm knowable only through logic, thought, and abstraction.

WHAT TO DO..?

Sound as a SENSE

When listening, we do not have the same ability to single out parts of the world as we do with vision. The sound reaches our ears as 'one', a 'whole-sound', and is not easily reduced into its parts without the help of vision or prior knowledge of what is sounding.

 

Listening takes place at the same time as the sonorous event, distinctly different from vision, as the visual is already there, available before it is seen. Unlike vision, where we have control and agency over what, and whether, we see, sound is very difficult to completely escape.

 

In a sense this suggests that hearing, to an extent, bypasses our analytic minds, and lets what we experience 'be' on its own terms.

 

Before birth, our sensory experience is a continuous bath in sound: your mother's breathing, her heartbeat, the rumbling intestines, the song of her voice and other voices. The whole world is experienced as sound, and as a whole-sound at that.

how do I define the entangled human body without fragmenting it into parts??

How to tell a system what you want? 

How does it understand how to do it?

How can I encompass the beauty and complexity of life with a few sensors?

And how do I map this to a response? 

How can I point at the wholeness of life through its parts?

How to model something so that it doesn’t detach itself from real life / reality..?

On a fundamental level, all forms of energy can be described as waveforms: oscillations of signals moving at different speeds, levels, and times. Signals coming from the body and signals produced by sound waves are both measured in Hertz, i.e. frequency, meaning oscillations per second: waves.
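As a minimal sketch of this shared vocabulary (the function name and the example frequencies are my own, purely illustrative), the same few lines can sample a slow bodily rhythm and an audible tone; only the frequency in Hertz differs:

```python
import math

def sample_wave(freq_hz, duration_s=1.0, sample_rate=1000, amplitude=1.0):
    """Sample a sine oscillation: the same maths describes a sound wave
    or a slow biosignal; only the frequency differs."""
    n = int(duration_s * sample_rate)
    return [amplitude * math.sin(2 * math.pi * freq_hz * t / sample_rate)
            for t in range(n)]

heartbeat = sample_wave(1.2)   # ~72 beats per minute, far below hearing
tone = sample_wave(440)        # concert A, well within the audible range
```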

 

Although I knew this, I had never dipped so much as a toe into the world of sound, while I have deep-dived into biosensors and signals...

Sound & sensors

Sounding bodies

So I took the first few steps into the world of sound. As someone who had never worked with anything sound-related, I felt I first had to understand some basics: how a speaker works, and how an audio signal is transported to it.

 

I made a cheap little portable, battery-powered amplifier and speaker box into which I could plug a jack cable. Coincidentally, some of the muscle sensor cables I had also used a jack plug.

 

To my surprise I could actually hear my muscles without any processing... To me it sounds like the inside of a womb, or somewhere deep, deep under water...

Computing systems, sensor systems, and datasets work with information as symbolic representations (binary 0/1) extracted from some quantifiable aspect of a living being.

 

This immediately detaches the information from the body and situation that created it. Once the information is extracted, the computation and production of meaning occur separately from the body and environment.

 

The cybernetic conception of information offered me the concept of non-symbolic, physicalist information processing as an alternative to symbolic representation: if we instead treat information as a 'signal' rather than a 'symbol', the information becomes inseparable from its material configuration and physicality.

PureData

At what point is the signal so far filtered and mapped that it loses its true meaning?

How does a measurement become a meaning?

symbol / signal

Because of the deeply embedded nature of sound, I discovered that using it as the medium to communicate information not only merged all the individual signals, making them inseparable, but also drew an immediate, intuitive response from those who were listening.

 

Listening to the sounds allowed for easy recognition of changes and observation of differences or likenesses between the sounding bodies, without having to separate and compare values.

 

The understanding of the sounds happened so intuitively, and did not require listeners to have any pre-existing knowledge about biosensors or what they represent; instead, they just 'felt' the meaning.

meaning for users

How do we define the term ‘information’ ?

 How is information different from knowledge ?

 How is information different from data ?

sonic feedback loop

landline dial tones

Translating signals directly into sound could be a route to create alternative, and intuitive, embodied forms to make meaning of information without the need for pre-defined values or symbolic representations.

 

Since sound implicitly relates to space and time, if a system were to rely on sound as its form of information, it would be impossible for that information to be extracted and separated from its materiality. How a sound is received depends on the resonant space that surrounds its source.

 

If the sounds received by the microphones were then fed back into the bodies, which then influenced changes in their bodies, which in turn would alter the sound the sensors produced, would this then be a feedback loop? 

 

In the past I have used electrical stimulation as a tool to communicate messages to the body. Since electrical stimulation via a TENS unit is similar to how our brain sends impulses to our muscles, it made sense for this to be the medium that feeds information back into the body. The devices I have been using allow me to program the frequency and strength of the stimulation on the go. Since everything was already in frequencies, adding an output of frequencies and amplitudes made sense.

physiological effects of sound

Research has shown that different sound frequencies can have distinct physiological effects on the body. For instance, low-frequency sounds, such as a deep bass note or a soothing hum, can induce relaxation and deep breathing. These sounds have been found to activate the parasympathetic nervous system, which promotes a state of calmness and relaxation. When we hear low-frequency sounds, our heart rate tends to slow down, blood pressure decreases, and muscle tension eases.

 

On the other hand, high-frequency sounds, like a sharp whistle or a screeching noise, can create feelings of alertness and tension. These sounds stimulate the sympathetic nervous system, which is responsible for our fight-or-flight response. When exposed to high-frequency sounds, our heart rate increases, blood vessels constrict, and stress hormones like cortisol are released into the bloodstream. This physiological response prepares our body for potential danger or heightened awareness.

 

Sound frequencies in the range of 432 Hz are believed by some to resonate with the natural frequency of the universe and to promote a sense of harmony and balance within the body.

https://imotions.com/blog/insights/how-sound-affects-the-brain/#the-physics-of-sound-waves 

I recently discovered that the specific sounds old landline phones made when you pressed in the numbers you wanted to call were actually completely functional and vital. Unlike the buttons we know nowadays, which connect to a chip that has the meaning of each button coded into it, the landline buttons had no chip; instead, they used a combination of tones to communicate the number they represented.

What it made me think about was the possibility of using sound not only to 'send' information through a speaker, but also to 'receive' information using a microphone that listens to that sound, similar to how landline phones used microphones and speakers to communicate a number through dial tones. The only difference is that a dial tone is fairly isolated; in my case I wanted this process to be interfered with by the surrounding spatial noise.

ref: https://www.tech-faq.com/frequencies-of-the-telephone-tones.html
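The dual-tone scheme behind those buttons (DTMF) can be sketched in a few lines of Python: each key is encoded as the sum of one low and one high tone. The frequency table is the standard DTMF one; the sample rate and duration here are arbitrary choices for illustration:

```python
import math

# Standard DTMF table: each key = (low tone Hz, high tone Hz)
DTMF = {
    '1': (697, 1209), '2': (697, 1336), '3': (697, 1477),
    '4': (770, 1209), '5': (770, 1336), '6': (770, 1477),
    '7': (852, 1209), '8': (852, 1336), '9': (852, 1477),
    '*': (941, 1209), '0': (941, 1336), '#': (941, 1477),
}

def dtmf_samples(key, duration_s=0.2, sample_rate=8000):
    """Generate the summed pair of sine tones that 'speaks' one key."""
    low, high = DTMF[key]
    n = int(duration_s * sample_rate)
    return [0.5 * math.sin(2 * math.pi * low * t / sample_rate) +
            0.5 * math.sin(2 * math.pi * high * t / sample_rate)
            for t in range(n)]
```

A receiver on the other end only has to detect which two frequencies are present to recover the key, which is exactly the send-through-a-speaker, receive-through-a-microphone idea.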

the body is the locale

why it matters

The agent alignment problem

"Games are a useful benchmark for research because progress is easily measurable. This helps us determine empirically which algorithmic and architectural improvements work best. However, the ultimate goal of machine learning (ML) research is to go beyond games and improve human lives. To achieve this we need ML to assist us in real-world domains. Yet performance on these and other real-world tasks is not easily measurable, since they do not come readily equipped with a reward function. Instead, the objective of the task is only indirectly available through the intentions of the human user. 

 

In order to differentiate between ..(innovative solutions and degenerate solutions), our agent needs to understand its user’s intentions, and robustly achieve these intentions with its behavior. We frame this as the agent alignment problem:
 

How can we create agents that behave in accordance with the user’s intentions?"

 

reference: https://arxiv.org/pdf/1811.07871.pdf 

body-> freq-> speaker-> mic-> freq-> body

As part of the research group, each member has a turn to host the other researchers at their site of practice. Here you can see the user tests I did with the group when I hosted the other researchers at my studio.

Since I was comfortable working with signals from sensors, it was surprisingly easy to translate these digitally into sound.

 

I discovered PureData, a free and open source visual programming language for multimedia. I was able to very rudimentarily map sensors to sounds. I did not want to filter the signals at all before mapping them, since I didn't want to disregard the noisiness of the body.

 

By connecting several sensors and translating them directly into sound, all the signals merged into one space-time.
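A rough sketch of this kind of unfiltered mapping (I did it graphically in PureData; Python here is only for illustration, and the 10-bit sensor range and pitch range are assumptions, not my actual patch values):

```python
import math

def sensor_to_freq(raw, raw_min=0, raw_max=1023, f_min=80.0, f_max=800.0):
    """Linearly map an unfiltered sensor reading to a pitch in Hz."""
    frac = (raw - raw_min) / (raw_max - raw_min)
    return f_min + frac * (f_max - f_min)

def merge_to_sound(readings, sample_rate=8000, n=800):
    """Sum one sine per sensor: the signals become inseparable in one waveform."""
    freqs = [sensor_to_freq(r) for r in readings]
    amp = 1.0 / len(freqs)   # keep the summed signal within [-1, 1]
    return [sum(amp * math.sin(2 * math.pi * f * t / sample_rate) for f in freqs)
            for t in range(n)]
```

Because nothing is filtered before the mapping, the noisiness of the body passes straight through into the sound.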

 

As part of Research Day, our Research Lectorate was part of a Mushroom Radio Show, where we presented our 'sonic sources' of research and inspiration. 

digitally mapping sound

*side note* In the image I am using my forearm muscle, but the sound you hear is my bicep muscle, since it is bigger & stronger = louder than the forearm muscle

THE BODY ORCHESTRA

As part of the exploration into sounding bodies, and the effects, meanings, and interactions they create, I was asked to give a two-day workshop with 16 high school students, which I called "Body Orchestra". Each student developed a sound from a sensor, and together they used their bodies to become an 'orchestra' and perform a composition.

 

At first the students had to find a way to introduce and express themselves through body movement alone. Using accelerometer (XYZ-axis) sensors, each student translated their body-movement identity into sound, and together each group performed their movements to create a composition.

 

The next day, they each chose a specific biosensor: heart rate, muscle, touch, light, colour, temperature, sweat, or direction and orientation. The difference here was that these sensors were not as easy to control intentionally; they often picked up involuntary bodily processes and were disrupted by environmental factors. The resulting sounds were much more complex, varied, and dynamic than those from the movement sensors. Below you can hear one of the compositions:

You might be wondering why I found landlines so inspiring....

resonance and feedback loops

One of my fellow researchers shared with me the work "I Am Sitting in a Room" by Alvin Lucier. Over the course of 45 minutes, his voice is recorded, replayed, and recorded again, over and over, until his words are indistinguishable from the other sounds.

 

What is incredible about this work is that, simply through repetition, these feedback loops eventually come to reflect a sort of past or history (the sound of his voice) as well as an awareness of the materiality of the space he is in. How the sound resonates depends on how the waves bounce around the room and how they reach the microphone again. Each room, material, and body you performed this in would sound different.

can i create a feedback loop between the insides and outsides of bodies inside a space ?

could a feedback loop be considered a form of decentralised intelligence ??

To understand how to create a sonic feedback loop, I wanted to start with the very basics of understanding sound. Thinking about sound not only as a sense, but as a force which moves particles through the air, as waves of energy bouncing off things or being absorbed by things. 

 

Here you can see the most basic and immediate feedback loop a speaker creates. The movement of the speaker, created by the sound, moves the connection wires that power it; when the two parts touch, the circuit closes (turns 'on'), which in turn makes the sound again.

HANDMADE ELECTRONIC MUSIC

While searching for tutorials that would help me understand sound, I discovered this incredible book: Handmade Electronic Music, The Art of Hardware Hacking, by Nicolas Collins. 

 

This large book takes you by the hand and guides you through simple experiments to create and play with sound using hardware (speakers, microphones, amplifiers), starting from the bare basics of a battery and a speaker, up to building your own components and tools. The experiment I showed above is an exercise taken from this book. Even if you have no experience in electronics, you can follow along!!

 

Not only does it provide examples and experiments, but almost more importantly, it is a guide to learning how to hack electronics and play (safely). What this does is give the agency back to the individual. In the true spirit of the hacker community, the book chapters are open source! You can find the chapters, along with many videos of examples and works, via the link!

This is another gadget I acquired, to listen to ultrasonic sounds: sounds that our ears cannot hear because their frequencies are higher than 20 kHz. This bat detector kit picks up ultrasonic sound and translates it into audible sound. Here I discover what my TL (fluorescent tube) lights sound like.

Since bodies are conductive, you can also use them as instruments when you expose your skin to a bare wire which is routed to an amplifier and speaker.

A piezo is a crystal that is affected by vibration. When it vibrates, it generates a charge, which means it can act as a contact microphone, picking up resonances of the material it is attached to.

If connected to a vibrating speaker and placed on the same surface as that speaker, it generates an endless feedback loop.

Some more experiments into sound...

fast fourier transform

If I wanted to try out my plan of creating feedback loops through bodies with sound, I needed to use a microphone. In the signal from the microphone, all frequencies and amplitudes are merged together, and the x-axis is time. What I wanted was to find the strongest frequencies present at each moment and work with those. But to do that, you first have to separate them.

 

Luckily this is not a new problem, and some smart people have already created a tool for it: the Fast Fourier Transform (FFT).

ref: https://www.nti-audio.com/en/support/know-how/fast-fourier-transform-fft

Mic-> frequencies + amps

Although I would like to filter everything as little as possible, this is not actually filtering, just a translation from one domain to another: the microphone collects the sound in the time domain, and the FFT transforms the values from the time domain to the frequency domain. This way you can observe the most present frequencies within a sound, instead of just its loudness.

 

Here you can see my basic setup with a microphone module that displays the most present frequencies it picks up on the screen.

frequencies-> stimulation

Once I managed to get the frequencies of the sound, I translated those into the frequencies of electrical stimulation. The frequency of the sound (the pitch) was translated into the speed, or pulse width, of the stimulation. Because my stimulator could only drive one output, I picked the strongest frequency and translated that into the body. How loud the sound is was translated into how strong the pulse is.
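The mapping can be sketched like this; the frequency, pulse-rate, and intensity ranges are illustrative placeholders, not the actual settings of my stimulator:

```python
def sound_to_stimulation(freq_hz, loudness,
                         f_range=(80.0, 2000.0),       # assumed audible band
                         pulse_range=(2.0, 150.0),     # assumed pulse rates, Hz
                         intensity_range=(0.0, 1.0)):  # normalised strength
    """Map the pitch of the strongest frequency to the stimulation pulse
    rate, and the loudness (0..1) to the stimulation intensity."""
    f_lo, f_hi = f_range
    frac = min(max((freq_hz - f_lo) / (f_hi - f_lo), 0.0), 1.0)
    p_lo, p_hi = pulse_range
    i_lo, i_hi = intensity_range
    pulse = p_lo + frac * (p_hi - p_lo)
    intensity = i_lo + min(max(loudness, 0.0), 1.0) * (i_hi - i_lo)
    return pulse, intensity
```

Clamping both inputs matters here: an unexpected spike from the microphone should saturate the stimulation, not push it past the device's safe range.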

 

In the video you can see the movement of my muscle which is created by the stimulation, and you can hear the music playing that is generating the stimulation. (excuse my loud breathing)

 

Although it is a pretty great experience, it would almost be greater if I didn't hear the music. Now it's almost as if the stimulation is the same experience as listening to the music, in that it doesn't add anything more. That being said, maybe it's just me, because I am so used to working with it... Time for user tests?

stimulation-> body = changes in..?

To complete the 'feedback' loop I had imagined, the stimulation created by the sound should in some way create changes in, or contribute to, the movements inside the body. The body is also creating sound, as in the Body Orchestra. In this way, what is being sensed and sounded is also feeding back into itself and influencing itself.

 

The most straightforward technique I thought to try was with a muscle (EMG) sensor. Since electrical stimulation creates involuntary muscle movements it would be picked up by the sensor, along with the natural movements of the body.

Hacking as an artistic practice

coming soon

Now that I had tested each part, it was time to put it all together in one feedback loop:

The body generates sound through a sensor, a microphone picks up the sound and its frequencies, the strongest frequency is translated into stimulation, and the stimulation influences the body, which in turn influences the sound again, and so on and so on.
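The whole loop can be caricatured as a toy simulation, just to show how each stage feeds the next; every constant here is made up, and 'muscle' is a single normalised activation value standing in for a living body:

```python
import math, random

def run_feedback_loop(steps=5, seed=1):
    """Toy simulation of the loop: muscle signal -> tone -> 'room'
    interference -> strongest heard frequency -> stimulation -> altered
    muscle signal, repeated."""
    random.seed(seed)
    muscle = 0.3                                   # arbitrary start (0..1)
    trace = []
    for _ in range(steps):
        tone = 80.0 + muscle * 720.0               # body -> frequency (Hz)
        heard = tone + random.uniform(-20, 20)     # room / mic interference
        stim = min(max((heard - 80.0) / 720.0, 0.0), 1.0)  # freq -> stimulation
        muscle = 0.5 * muscle + 0.5 * stim         # stimulation feeds the body
        trace.append(round(muscle, 3))
    return trace
```

Even this caricature shows the tuning problem: the loop's behaviour depends entirely on the relative weights of the body's own signal and the stimulation, which is exactly where my physical version became erratic.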

 

Although in theory everything was doing what it should, it was difficult for me to feel the magic in what I had created. Either the sounds were unpleasant or totally erratic; maybe the stimulation was not strong enough to move the muscles, or it was too strong for the sensor and completely overpowered and overshadowed the natural signals coming from the body.

 

This loop made sense, but I struggled to find its rhythm, and I didn't have the feeling that there was any kind of 'accumulation' or 'memory' going on that could lead to some kind of 'learning' or 'response'.

 

Right now this is a single body-plus-space system, but does the learning happen inside the loop, or in the body experiencing the loop?

 

Is this really artificial intelligence? Do I need a computer or another 'being' that is also part of this loop? What is the loop saying?

closing the loop

What do I want the system, or the parts of the system, to learn?

What do I want to sense & share?

What is the impact of the experience of that sensing and sharing?

For what am I creating this system?

In what kind of environment does this system exist?

How are they going to learn?

How do I observe something if I am in it / part of it?

Instead of focusing on a specific site or location, the locale of the system begins with the body embedded in a local environment. I do not see the body and the things around it as separate, so when I speak of body, I speak of a body which is alive because it is interacting, influencing, and moving through other forms of being on Earth.
 
The body is the subject. Returning to the tangibility of Earth, and the sensuous body as the site of research and source of information. 
Instead of turning towards expensive scientific tools and specialised technologies to isolate distinctive signals in the human body, I want to move away from a human-centred system and instead find the in-common, shared features between living beings on Earth.
 
What defines a body, and what defines life?
What are the primary shared ‘commons’ between us, and other beings?

sensing the 'in - common'

understanding a system using 'radical observation methods'

The observation techniques of Debra Solomon provide an intriguing structure that I could extend to an artificial learning system. But instead of just observing and learning from the world, how can a system or artificial intelligence begin to exist in the world?

 

It begins to exist not by observing and recording the world, but by engaging and participating in the world.
 
This involves considering the Umwelt of the system: defining its perceptual world and its effector world. What can it perceive through its senses (sensors), and what can it act upon, and how?

How can a system or artificial intelligence begin to exist in the world? 

TAKING PART IN THE WORLD

INSTEAD OF OBSERVING THE WORLD

breath, air

stay tuned for more updates...!

All living things breathe. Air is what connects us all. Like water for fish, air is the binding fluid between all bodies on Earth. There is a continuous exchange between our body and the breathing Earth. It is priceless; it defies quantification and possession.

This method centres on developing an individual and group methodology for practical landscape and soil care, based upon continuous site observation. Developed through practical experience, it aims to quiet the mind to enable focus on time-related processes. The objective of Radical Observation practice is to evolve into knowledgeable stewards by grasping the patterns and rhythms of an ecosystem, and to view oneself as an integral part of that environment, i.e. as part of the community. Solomon's initial idea was to respond to expensive techno-scientific quantitative assessment of nonhuman qualities with a qualitative assessment that learns to pay attention to the rhythms of the nonhumans inhabiting the food forests.

 

Through regular practice of Radical Observation, individuals come to understand the patterns and rhythms of the ecosystem, experiencing the self as part of it and gradually becoming a knowledgeable steward. Practitioners of Radical Observation exercises assume observation postures for periods ranging from ten minutes to an hour to a month, incorporating specific perspectives towards ongoing natural-world processes and entropy. Developed over years of teaching, the technique focuses attention on processes occurring over time: plants growing throughout the seasons, plant communities wandering through space, habitats accommodating ever more plant and animal life.

Time, rhythm, experience in the moment rather than annotation / notes, be inside it, part of it, with it. Very local, and embedded, uses time and repetition to allow for noticing.

By directly transferring important characteristics of something into a form that can be intuitively read and understood, anyone, without any training or scientific background, can compare two transfers and notice their qualities.

WHAT IF I APPLY THE SAME TECHNIQUES BUT THEN INSTEAD ITS A 'MACHINE / SYSTEM' THAT LEARNS HOW TO FOCUS ATTENTION ON MORE-THAN-HUMAN-PROCESSES?

 

AND WE BOTH LEARN HOW TO SHARE OUR OBSERVATIONS AND LEARN FROM EACH OTHER?

Method 1: Twenty-Four

The first method consists of spending one hour per day, over the course of an entire month, in one spot, paying maximum attention to the natural surroundings and taking 'mental notes'.

Method 2: Soil chromatography

The second is a photographic process. Finely ground soil is absorbed by filter paper that has been prepared with silver nitrate. Thanks to capillary action, a visual representation of qualities of that soil appears. The organic content is lighter than the mineral content and thus travels further onto the paper. The higher the organic content, the healthier the soil. This method produces images that can be logged. 

Like soil chromatography, maybe my translation of headset signals directly into sound allowed others to intuitively make meaning and relations between and with information.

I wanted to look at how I could capture the rhythms and changes of breath over a long period of time, where a single inhale and exhale is hard to notice, but the general trends over time could show or teach something about the patterns and movements of the internal world interacting with the external world.

 

It needed to be comfortable and wireless so that I would not focus on it too much, since that would make me want to control or alter my breathing.

 

Funnily enough, while wearing the sensor for several hours in a row I did not pay much attention to it, but for almost a day after I had taken it off, I was much more aware of my breathing. What surprised me is that this awareness lasted so long after the sensing had stopped.

sensing the rhythms of breath

Hot wire anemometer

By heating a resistance wire to a temperature above room temperature, and then measuring the changes in current consumption of the wire, I was able to detect very small movements of air through the space. 

 

When the air moves, it cools the wire slightly. The circuit has a homeostatic characteristic, i.e. it tries to keep the wire at the same temperature, so when the wire cools down it draws more current from the supply, and that change in current can be read as movement of the air.
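Read as code, the detection logic is just a comparison against a still-air baseline; the current values and threshold below are made up for illustration, not calibrated readings from my wire:

```python
def airflow_from_current(current_samples, baseline_window=10, threshold=0.02):
    """Flag air movement from a hot wire's current draw (in amps): moving
    air cools the wire, so the current rises above the still-air baseline.
    The first `baseline_window` samples are assumed to be still air."""
    baseline = sum(current_samples[:baseline_window]) / baseline_window
    return [max(i - baseline, 0.0) > threshold
            for i in current_samples[baseline_window:]]
```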

the exchanges of breath

There is something about the mutual exchange of air, and the co-dependency of animals and plants for their life sources, that has always fascinated me.

 

When we inhale, we give life to our bodies in the form of oxygen, and our bodies in turn exhale carbon dioxide, which provides plants with their vital life source; they exchange this back into oxygen.

 

Using a CO2 sensor, a space can be sensed through changes in the presence of this gas. The more breathing animals, the more carbon dioxide. Suddenly presence is felt in the space not through the bodies themselves, but through how they exchange the air in the space.

by participating in its breathing..

How to not mystify technology ?

The following is a taxonomy of the animal kingdom. It has been attributed to an ancient Chinese encyclopedia entitled the Celestial Emporium of Benevolent Knowledge:

 

On those remote pages it is written that animals are divided into

(a) those that belong to the Emperor,

(b) embalmed ones,

(c) those that are trained,

(d) suckling pigs,

(e) mermaids,

(f) fabulous ones,

(g) stray dogs,

(h) those that are included in this classification,

(i) those that tremble as if they were mad,

(j) innumerable ones,

(k) those drawn with a very fine camel's hair brush,

(l) others,

(m) those that have just broken a flower vase,

(n) those that resemble flies from a distance

 

(Borges, 1966, p. 108).

How to share your observations

Radical Observation exercises are always practised alone and in silence (!), yet their purpose is to serve a collective approach to any intervention. So, after performing the exercises, come together with your group and practise reporting your observations to each other. In time, this will create a rich group understanding that guides interventions for any specific place.

 

Working in pairs, divide into 1) observation reporter and 2) observation narrator. For 3 minutes, the reporter reports their observations whilst the narrator listens. Then the narrator repeats this report back, checking for accuracy with the reporter. Practise narrating the observational report until the reporter approves. Let the reporter add to the story if something was left out, or correct it if it was not reported just right.

When the report is narrated satisfactorily, switch roles. 

 

After 12 minutes, return to the group at large and share these observations, always reporting for your partner, with your partner reporting for you. In this way, the group starts to create a common language, gets used to each other's way of speaking, and develops a rich way of understanding as a group. With practice, this might later include discussing common values, aesthetic preferences, and concerns.

Unlike other biosensors I have worked with in the past, which generated signals that reflected a person's state of being, breath can be both an involuntary action and a conscious, controlled one; breath can reflect as well as alter our state of being.

Breath in the human body determines the state of being: oxygen in our blood, brain frequency, state of mind.

In the past, cultural artefacts, tools, and objects were each able to create effects and intervene in human action. By doing so, they contributed to the general movement, animation, and vibration of the Earth. They filled the Earth and participated in its 'breathing'.

a cybernetic system

vs. a machine learning system

Cybernetic machines were not envisioned to manipulate symbolic information. They were intended to act on the world, and to orient themselves in a changing environment by adapting their way of doing things according to a goal. It is through feedback from the environment and from its own actions that such a machine knows whether it has reached its goal.

 

Machine learning models process, compute, and learn from a prior reduction of the world into data, i.e. symbolic information (datasets). In cybernetics, by contrast, the defining characteristic of the system is its performativity within the environment: the manipulation of representations may still happen, but it is secondary to the performative dimension.

 

"Situated Cognition Theory posits that knowing is inseparable from doing, arguing that all knowledge is situated in activity bound to social, cultural and physical contexts. In essence, cognition cannot be separated from the context. Instead knowing exists, in situ, inseparable from context, activity, people, culture, and language. Therefore, learning is seen in terms of an individual's increasingly effective performance across situations rather than in terms of an accumulation of knowledge, since what is known is co-determined by the agent and the context. "

 

Rather than defining knowledge, intelligence, and learning in terms of quantity, complexity, and efficiency, I can use situated cognition theory to define these terms and their relationships to each other.

situated cognition theory

reference: https://en.wikipedia.org/wiki/Situated_cognition

Marginal objects, objects with no clear place, play important roles. On the lines between categories, they draw attention to how we have drawn the lines. Sometimes in doing so they incite us to reaffirm the lines, sometimes to call them into question, stimulating different distinctions.
 

Sherry Turkle, The Second Self, 1984

The purpose of the methods is to view oneself as an integral part of that environment...
 

but how does something I create become this? 

the agent alignment problem

I HAVE ALL THESE PARTS.. HOW DO I PUT IT ALL TOGETHER???

can a feedback loop be a method for creating intelligence ?