pcomp: [The BearBooth] Final

After the box was fabricated, the greatest challenge was figuring out how to mount the motors, which involved a significant amount of planning.

We knew we wanted the motors attached to the back wall of the box, which meant cutting holes in both the back acrylic and the box itself. We decided the easiest way to do this was to laser-cut precise holes in the acrylic for the motors to sit in, and then cut the back of the box a little more haphazardly, since the wood was harder to cut.

The motors fit!

We velcroed the back panel of acrylic to the wood so that we could easily take it out to adjust the motors. We hot glued the motors to the acrylic, but ended up velcroing a few of them in the end, since it became apparent they would need frequent adjustment and maintenance.

The last major fabrication hurdle was mounting the bears to the motors. I’m sure there are more elegant ways to do what we did, but after much trial and error, we decided to use small wooden dowels arranged in a cross that fit into the bears. This provided a stiff armature with which to attach the bears to the servo attachments, with none other than the infamous hot glue.

We also played a lot with the positioning of the bear on the motor since the servos are particularly strong. If the attachment was too low, the bears would slump over when they were turned on.

When we put the whole thing together, we found that our soft switch, which was our very first version, was acting a little temperamental. We had to back up a few steps and quickly realized that the foam in between the conductive fabric was slipping and allowing the circuit to close indefinitely. We remedied this by trimming the conductive fabric and dabbing a bit of hot glue on the foam so it would hold in place. This worked brilliantly.

Wires run behind the acrylic

At this point, our electrical system was working very well — lights, camera, sound, motors — which was a delightful achievement. Dare I say we had fun??

Paula arranges the BearBooth


Fritzing Schematic

Unexpected Time-Consuming Tasks

1. Camera placement: We played with this A LOT. We wanted to capture the face of the user as well as some of the box.
2. Bear placement on the Servos
3. Light Color: We wanted one that was flattering.

Class Comments

1. Add something to direct the user’s eyes towards the camera as it currently looks like people are looking down.

2. Adjust the code for when the picture is taken — it is triggered 5 seconds after the switch initially closes, but then takes photos instantly afterwards. It would be ideal if it waited 5 seconds every time so that it captures the reaction of the user (see the sketch after this list).

3. Have the lights change color constantly and perhaps trigger the camera when the lights turn white.
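
On the second comment, the fix is probably just to restart the countdown on every closing edge rather than only the first one. A minimal sketch of the idea in p5.js, assuming a switchState variable (0 or 1) updated from the Arduino in serialEvent(); all names are hypothetical:

let switchState = 0;   // 0 or 1, set in serialEvent()
let previousState = 0; // reading from the previous frame
let closedAt = 0;      // when the switch last went from open to closed
let photoTaken = false;

function draw() {
  if (switchState === 1 && previousState === 0) {
    closedAt = millis(); // every new face restarts the countdown
    photoTaken = false;
  }
  if (switchState === 1 && !photoTaken && millis() - closedAt > 5000) {
    saveCanvas('bearbooth', 'jpg'); // captures the reaction ~5 s in, every time
    photoTaken = true;
  }
  previousState = switchState;
}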

Presenting in Class

Things we would like to change

1. Camera timing: this means playing with the code (see the sketch under Class Comments above).

2. Projector quality: Right now, the photo looks very grainy when it’s projected, even though it’s quite clear on the screen.

3. Push the images to a Flickr account, which we more or less already have set up with IFTTT.

After class, we went to move the project to a different spot on the floor to document it, but ran into trouble with the motors. When we plugged it in, all of the servos were dead, despite working literally 5 minutes before. We checked the power supply with a known functioning motor and the power and circuitry seemed to be ok.

Our best guess is that the motors were still connected to the desktop power supply when we turned it on, and the resulting surge delivered more power than the servos could handle.

This is what I will call Learning the Hard Way, which has been very thematic of pcomp in general.

We ordered a 10-pack of servos immediately thereafter and have plans to replace them on Tuesday.

At that point, we will take more thorough documentation of the final product.

PComp-9: Final Project Ideation and Prototyping

Paula and I were initially set on creating an interactive installation. We were both drawn to the idea of transporting someone outside of their current reality, even if momentarily. We also both kept coming back to the idea of playing with scale, whether that meant something oversized or multiples.

From these interests, we came up with the basic structure of controlling motors through motion via Kinect.


We wanted these motors to hang overhead and react to a body below, reeling something up when motion was detected and then releasing it as the person walked away. In this way, we felt it would mimic a wave.
We found this inspiration after the fact:

When we discussed it at length with classmates during play testing, we received some really good feedback from which it became apparent that executing this idea may require more than 6 weeks. Aside from the time complications, I think the idea of an underwater experience became a bit murky in both of our minds over the course of explaining the project. Would people understand the analogy? Would we be able to aesthetically pull off the vision in our heads? Without a solid understanding of what the user was supposed to take away from our proposed installation, we didn’t think it would work. While this was frustrating, I think being able to understand how an idea doesn’t work is as valuable as understanding why it does.

We decided to scrap the initial idea and go back to the drawing board. During our playtest, Lisa had mentioned using mirrors to add scale to our project. Since Paula and I were initially drawn to using mirrors, we re-explored that idea. We also tried to keep with the theme of creating a transportive experience, particularly one that was lighthearted and playful.

This led us to 'The Box' (working title). We envisioned The Box as a mysterious entity from the outside that invites the user to peep into an opening. The pressure of a face resting on the opening would act as a switch to trigger the system. The Box would be lined with mirrors and constructed to act as an infinity room. When the system is on, lights come up, music plays along with a chorus of cheers, and teddy bears on a motorized automaton bob up and down. We are also interested in catching this moment of surprise via an internal camera triggered by the same switch. We hope to give users a digital photo of their face on a bear's body as a souvenir of their discovery of this playful experience.

Bill of Materials (Costs TBD)


Inspiration:

Yayoi Kusama’s work titled: Infinity Mirror Room-Love Forever
Automata
Highly accurate schematic

We intend to use bears we can thrift from shops in the city to try to bring the bears, and one’s childhood, back to life.

Our schematic is as follows.


The shape of the box took inspiration from my own childhood. I remember sitting in department stores and playing with the folding dressing-room mirrors, enclosing myself in the panels and pretending there were a million of me. It was pure delight.

We also played with a hexagon shape, but we were not sure if this would give us the effect we were going for so we started prototyping with the original idea.

Our circuitry would be as follows. The face making contact with the head hole would trigger the switch for the system. We initially thought of an FSR as a convenient sensor, but in reality, a simple digital switch would do the job. For the switch, we were given the suggestion to use conductive copper tape that would close the circuit when skin made contact. The system includes lights, sound, motorized bears, and a camera.

To prototype, at Benedetta’s suggestion, we gathered some reflective material from Canal Plastics to emulate the mirrors and play around with the shape of the box. We initially stapled the material to cardboard, but realized it wasn’t sturdy enough and switched to foam.

Imagine a bear’s body on Paula’s head



Next, we wanted to play with the circuitry. We hooked up a single button switch. From here, we started to connect some of the functionality.

We found some free sound files on freesound.org and, using p5.js, connected them to the button so they would play and loop while the button was pressed.
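
The p5.sound side of this is small. A minimal sketch, assuming the button state arrives over serial as buttonState (a hypothetical name) and the file is any clip from freesound.org:

let cheers;          // a p5.SoundFile
let buttonState = 0; // 1 while the button is pressed, set in serialEvent()

function preload() {
  cheers = loadSound('cheers.wav'); // any freesound.org clip
}

function draw() {
  if (buttonState === 1 && !cheers.isPlaying()) {
    cheers.loop(); // start looping the moment the button goes down
  } else if (buttonState === 0 && cheers.isPlaying()) {
    cheers.stop(); // cut the audio when the button is released
  }
}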

Then we used capture video and programmed it so that when the button was pushed, a snapshot would be taken. We wanted to delay the snapshot about 2-5 seconds after the circuit was closed to allow for a few seconds of input and acclimatization by the user. The photo is saved to the desktop. This took a bit of fiddling and a guest appearance by Moon to really amp up the code elegance, but here it is so far:

CODE
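
As a rough sketch of that logic (not our actual code; switchState is assumed to arrive from the Arduino in serialEvent(), and the exact delay is illustrative):

let capture;         // webcam feed
let switchState = 0; // 0 or 1, set in serialEvent()
let closedAt = -1;   // millis() when the circuit closed; -1 while open

function setup() {
  createCanvas(640, 480);
  capture = createCapture(VIDEO); // the computer's camera, for now
  capture.hide();
}

function draw() {
  image(capture, 0, 0, width, height); // mirror the feed onto the canvas
  if (switchState === 1 && closedAt === -1) {
    closedAt = millis(); // the circuit just closed: start the countdown
  }
  if (closedAt > -1 && millis() - closedAt > 3000) {
    saveCanvas('bearbooth', 'jpg'); // one snapshot a few seconds after closing
    closedAt = -1;
  }
}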

Using the computer camera and speakers to simulate the effect works, but ideally, we could connect a camera to an Arduino board and make it portable. This is currently serving as our inspiration.

A way to make sound portable with Arduino.

Additionally and ideally, we would push the saved images to a blog or webpage that would display all the images the surprise photobooth had taken. For this, I know we will need some outside help.

Our next item on the agenda is to close the box with the mirrored material from above and below and connect the lights into the box before we really start finalizing the circuitry. In short, we need to work on the environment inside the actual box.

I would still like to try to design an automaton using the laser cutter, or even wire, as I think this could save time, but ultimately I believe we are both more interested in making a surprising, transformative photobooth. A motorized automaton would be the icing on top of the cake.

Pcomp-8: Midterm and Final Project Thoughts

MIDTERM PROJECT

For our midterm, Gustavo and I wanted to create a surreal user experience. We wanted to highlight the over-abundance of technology and information used to convey ideas or experiences that can be had through more basic, real-world interactions.

We decided to create a window that, when opened, would display the weather in scrolling LED lights. We had been learning about data collection in ICM, and the oft-cited example was gathering weather data. Weather is a particularly absurd example since we have so many ways to predict and measure it, yet there really isn’t a viable substitute for actually experiencing it.

Curtain snap closure switch

We wanted the interaction to be simple, almost laughably so. Our first idea was to use blinds, but the scale and complexity made us scrap that idea in favor of a window curtain. Opening the curtain would trigger a switch that would turn on the whole system.

We decided to use a Neopixel matrix from Adafruit that Gustavo had already played with and could already drive with scrolling words from the Arduino.

However, our first challenge was to set up communication between p5, which was serving as our data-fetching conduit, and Arduino, which was speaking to the Neopixel.

In short, p5 would grab the weather from OpenWeatherMap.org and send the information to Arduino. We were having lagging problems, so Dano helped us set up a timer that would only ask for weather data every 15 seconds. In the code below, ID is the weather ID and inByte is the city (being sent from Arduino to p5), which we had to make sure were not undefined before serial writing to the Arduino:
//Check Weather once every fifteen seconds
if (millis() - lastTime > 15000) {
  lastTime = millis();
  console.log("Checked Weather");
  askForWeather();
  if (ID !== "undefined" && inByte !== "undefined") {
    serial.write(ID + " in " + inByte + " ");
    serial.clear();
  }
}

We also ended up coding 15 different cities, each with a unique color combination of text and background.

We encountered a lot of hiccups just in the programming, but our final code for p5 is HERE and the final Arduino code is HERE.

After combing Craigslist for a window frame, we decided to scale down and use a pre-fab “window” from Bed, Bath & Beyond — a bamboo drawer organizer.

We also used some scrap translucent plexiglass to create the window and simply taped the Neopixel to the “window.”

Next, we added a potentiometer to the top of the window which toggles through the cities (New York, Rio de Janeiro, Santiago, Rome, Cairo, Tokyo, Cape Town, London, Mexico City, Delhi, Moscow, Lagos, Seoul, Beijing, Paris, Buenos Aires, Tegucigalpa, Kigali, Caracas, Barcelona, Astana, Amsterdam, Berlin, Los Angeles).
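
In our build the Arduino reads the pot and sends the city name over serial, but the selection logic is just a map from the pot’s 0-1023 range onto the list. A p5-flavored sketch of the idea, purely illustrative:

const cities = ['New York', 'Rio de Janeiro', 'Santiago', 'Rome' /* ...and the rest */];

function cityFromPot(potValue) {
  // divide the pot's travel into equal bands, one per city
  let index = floor(map(potValue, 0, 1023, 0, cities.length));
  return cities[min(index, cities.length - 1)]; // clamp the very top of the range
}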

FINAL PROJECT IDEAS

Since the idea of the final project was first uttered, I haven’t been able to shake the idea of working with the concept of lost and dying languages.

I think it’s fascinating that we are seeing such a sudden disappearance of languages in the world, largely due to connectivity and technology. And along with language, entire cultures disappear.

This resource also exists in NYC: http://elalliance.org/

But as is often the case, the problem can be disguised as the solution. For my final, I would like to explore how to use technology in an interactive way to preserve, or at least highlight, lost and dying languages.

In an ideal world, where time and money and skill aren’t limits, I envision an interactive, light-based installation. I want the installation to be an immersive experience. I want it to convey the idea that languages are dying, and with them, cultures. I would want it to exist on many levels.

Clearly, I’m having trouble pinning down a tangible idea, in part because I haven’t decided how a user would “interact” with a language.

My initial idea was to have motion-detection lights synched with an audio recording. The audio would be the same word in different languages. The intensity of each light would correspond to the language’s global prevalence. The lights would be arranged overhead in a spiral shape that the person would walk through. I would also like some sort of projection of the name of the language.

Alternatively, I would like some sort of visual representation of the culture from which a sound would be emitted.

In short, I have an abstract concept and I would love some help developing.

PComp6: Serial Communication

Lab: Intro to Asynchronous Serial Communications


This lab was relatively straightforward for me, except that I did not use CoolTerm, so I need to familiarize myself with this software.

Data in many formats
Punctuation with a potentiometer and photocell
Punctuation with a potentiometer, photocell, and button
Code for punctuation with all 3 sensors

Lab: Serial Input to the P5.js IDE

Above, I am controlling the X-value with a photocell and the Y-value with a potentiometer. The values of the sensors are mapped to the width and height of the screen, respectively.
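
The mapping is just p5’s map() on each incoming value; a sketch with hypothetical variable names:

// photocellValue and potValue arrive over serial, roughly 0-1023
let x = map(photocellValue, 0, 1023, 0, width);
let y = map(potValue, 0, 1023, 0, height);
ellipse(x, y, 20, 20);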

Screen Shot 2015-10-14 at 3.33.22 PM

Above, I am just playing around with a potentiometer.

Graphing sensor values using p5
Using serial.string

At this point, I got confused. After implementing the code below, is the graph supposed to smooth out? What’s happening??

function serialEvent() {
  // read a string from the serial port:
  var inString = serial.readStringUntil('\r\n');
  // check to see that there's actually a string there:
  if (inString.length > 0) {
    // convert it to a number:
    inData = Number(inString);
  }
}

QUESTIONS

Up until the end of the last lab, all was good. I definitely need some clarification on serial.readStringUntil().

Can you map values only using p5 or do you have to map values on p5 and Arduino?

Add to these some general feelings of “sort of understanding, but kind of feeling like I’m copying and pasting without total comprehension.” I’m hoping these abate with more practice.

Speaking of practice!!!!
I created a p5 sketch, controlled by a potentiometer, that tries to visualize the emotional swings relating to competency while at ITP.

CODE HERE
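
The sketch is linked above; mechanically, it boils down to mapping the serial value onto a mood axis (purely illustrative, not the actual code):

let inData = 0; // potentiometer value from serialEvent()

function draw() {
  background(255);
  // low pot values sit at the sad-clown bottom, high ones at the tiny-triumph top
  let mood = map(inData, 0, 255, height, 0);
  fill(0);
  ellipse(width / 2, mood, 20, 20);
}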

Sad clown…actually my face
Tiny Triumphs (courtesy of Hyperbole And a Half)

SYNTHESIS: on chickens

At the end of last week, we had a chance to put to use some of our burgeoning skills during a class called ‘Synthesis’ wherein we began to integrate physical computing (Arduino) and computational media (p5).

All 1st-year ITPers gathered on the floor. We were separated into pairs and given less than 2 hours to build a browser experience that responds to a single button/switch/sensor input.

I was paired with David, whom I had never met until that day. We quickly bonded over our mutual affection for Werner Herzog and decided he needed to be a part of our synthesis project.

Mr. Herzog has a particularly unique view of life. He can take the simplest idea and weave it into a hypnotic, even therapeutic (albeit, dark) narrative with both his choice of words and his distinctive voice.

In our effort to highlight this, we decided to make a digital switch out of a reclining leather chair so that when the headphone-clad user was fully reclined, a soothing Herzog monologue would play. David quickly rigged up a circuit so that two pieces of metal would touch when the chair was upright and then separate while reclining.

We used an audio clip from a segment wherein Werner muses on chickens:

When our switch was closed (or digital input = 1), the user would see the following on screen:

Digital Input = 1

When the switch was open (digital input = 0), the user would see and hear:

Digital Input = 0
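
The browser side amounts to switching an image and an audio track on that serial value. A minimal sketch with p5.sound, assuming inData is updated in serialEvent() and the asset names are hypothetical:

let monologue;         // the Herzog chicken clip
let upright, reclined; // the two screens above
let inData = 1;        // 1 = contacts touching (chair upright), 0 = reclined

function preload() {
  monologue = loadSound('chickens.mp3');
  upright = loadImage('upright.png');
  reclined = loadImage('reclined.png');
}

function draw() {
  if (inData === 0) {
    image(reclined, 0, 0, width, height);
    if (!monologue.isPlaying()) monologue.play(); // recline to hear Herzog
  } else {
    image(upright, 0, 0, width, height);
    monologue.stop(); // sitting up (or a flaky contact) resets the track
  }
}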

Our code worked perfectly, but we had some problems with our switch connection on account of building quickly. As a result, all of our visitors had to find the sweet spot to hear the sweet nothings of Mr. Herzog. The spot was so finicky that even moving the slightest inch reset the track, so our invitation to ‘Relax’ turned into quite the opposite, which, in reality, complemented the voiceover quite nicely.

Rebecca approves
Jamie ponders chickens
I test drive

Pcomp-4: Servos & Speakers

This week, we focused on simulating analog outputs with the Arduino via Pulse Width Modulation (PWM).
Since the Arduino doesn’t have true analog output pins, we used digital output pins that allow for PWM.

This week, I wanted to better understand coding the Arduino. Right now, my knowledge here is limited, which makes troubleshooting, particularly with analog outputs, difficult.
For the first lab, I used a photocell analog sensor on A0 to drive a servo motor as an output.

The servo motor lab was fairly straightforward, and the potential applications are apparent to me.

The next lab, the tone lab, was definitely more of a challenge. We built the first circuit in class: analog input, PWM output to a speaker.

Next, I wanted to work with the pitches.h library. This is where I got confused. The library was not working for me, so I enlisted the help of someone from another class. He explained the concept of libraries to me and suggested I forgo the library altogether and just use the raw frequency numbers for the tones. In the code below, I am simply telling the Arduino to play the notes in the notes[] list in a loop.

Screen Shot 2015-09-30 at 3.31.53 PM

This simple exercise helped to clarify a lot, and from here I began to play with controlling the speaker tone using photocells.
The first thing I tried was to map the values of one photocell to the pitch of the speaker, while the other photocell’s values dictated the delay between tones. In this way, the setup most closely compares to a theremin, which modulates pitch and volume; however, we have no volume control.
I decided to divide the values of the second photocell by a factor to keep the delay at a minimum. Changing this value changed the pace of the tone considerably. A small divisor = longer delay = more staccato sound (2nd video). In the videos below, photocell 2 is the bottom photocell.
Screen Shot 2015-09-29 at 11.26.26 PM

Keeping the circuitry the same, I changed the code so that the speaker would cycle through a list of notes dictated by the values of the photocells. I gave the whole system a 100ms delay. (Looking back now, I realize the loop is cycling through i < 4 when it should be i < 2 — this may be causing extra delay.)


Lastly, I was wondering how to set up the code to create two different analog outputs. I decided to add an LED and map its brightness to the value of photocell 1, which was also controlling the pitch. For this setup, I went back to the pitch/delay photocell control.
While I did get the LED to light, it didn’t seem like there was much control over the brightness — it seemed either on or off to me. This is one of my questions — is my code wrong? Am I thinking about the outputs incorrectly? (One possible culprit worth checking: on the Uno, tone() uses Timer2, which disables analogWrite() PWM on pins 3 and 11, so an LED on either of those pins will behave like a simple on/off output.)

Screen Shot 2015-09-29 at 11.47.51 PM

Even with this simple set-up, it’s easy to see how many different combinations of pitch and pitch length are possible.

In the coming weeks, I really want to focus on building a more tangible project off the breadboard. This means I will need to become more comfortable with soldering and with using external sources of power.

Right now, I still have questions about libraries (how to include them in the code properly, and how to use them) and about controlling multiple analog outputs. I would also appreciate going back over pitch/tone/frequency, etc., as these are getting muddled in my mind in practice.

Pcomp-3: Digital & Analog

Week 3 Labs: Playing with Digital Inputs/Outputs and Analog Inputs

Digital Input with Switch
Analog Input with Potentiometer

I wanted to play with the photocell resistor analog inputs a little more after the labs. First, I had to configure the circuit off the breadboard.

Next, I connected the pieces and ran them through the Arduino.


I wanted to map the resistance of the photocell to the brightness of the LED so that when light was restricted (i.e., a hand passed over the sensor), the LED would shine brightly, with its default brightness muted. To do so, I mapped the sensor values so that high values equated to a dim light:

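// plenty of light (a high reading) means a dim LED; a covered sensor (a low reading) means a bright one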
int brightness = map(redSensorValue, 800, 100, 0, 255);

I decided to place these photocell circuits into a rubber brain that I had. The idea would be that when you place your finger over the sensor, a particular color LED would light up and then the user could look up a color-coded key to find out what part of the brain they were in.


Sensor covered, red LED on

For now, I only used two sensors and two LEDs. The pins map as follows:
redPIN = Right Frontal Lobe —  where you make choices between right and wrong and project future consequences
greenPIN = Left Temporal Lobe —  forming long term memories

I like the basic idea of this analog input, but I would love to see a more interesting and interactive output. For instance, an associated output that was an image and a recording. I even like the idea of getting a little weirder and more abstract with the associated output. TBD!

Public Interaction:
Point-of-Sale Systems, particularly in Coffee Shops

In the last year or so, I’ve noticed a discernible increase in the use of interactive point-of-sale systems at local stores. My most consistent interaction with them is in coffee shops.

solidsmack.com

The interface used by most of these systems is actually quite simple and well designed. It definitely models itself after the finger-swiping vision of the future. The cashier rings you up for your purchase and swivels the head of the tablet cash register around to face you; you then choose among the tip options (or no tip at all) and sign.

In the one coffee shop I staked out, most people were able to navigate this technology seamlessly — everyone is very familiar with the interactive screen at this point. The actual pivoting of the tablet by the cashier is really the most cumbersome technical process.

But by far the most lengthy and awkward part of this interaction is the human interaction. The act of displaying the cost of your bill to the queuing patrons behind you is strange in and of itself, but then this technology goes the extra mile and almost shames you into tipping. I have personally felt this awkward moment, standing and facing the screen, debating whether I needed to tip this person for my $5 cup of iced coffee and whether those behind me would judge me for my “No Tip” decision.

My observations confirmed that many people feel this same pressure. There is a visible pause when that screen turns to face you, and I am certain it is not because of confusion with the technology: I saw fingers immediately shoot out and then halt. Rather, it is a public moral dilemma.

In a restaurant, where tipping is the norm if not required, it is a much less awkward interaction, but in a coffee shop, tips are surely optional.

I am certain this was a deliberate decision on the part of the software designer, and I’m interested to know whether this technology made a difference in tip totals versus the old-school glass jar on the counter begging for my charity.

Consider a common interface:


All of the tip amounts are placed along the top, while the No Tip option is clearly positioned on the bottom. Even from a distance, it is easy to see what most people choose.

From my research, I found the tip-to-no-tip proportion to be about even, which surprised me. I thought most people would be immune to the guilt.

After writing this, I came across this article, which came to the exact conclusion I did — this interface is a guilt trip. Looking at the numbers, Square’s point-of-sale software resulted in tips on more than half of “tippable” transactions, a 37% increase over the previous year (the article was written in 2013).

It’s interesting how with this example of a public interactive technology, the actual technology was less distinctive than the psychology behind the design.

Pcomp-2: Making circuits

Week 2 Labs: Breadboard, multimeter, and switches

The beginning: Arduino and Breadboard

LEDs in parallel with button
LEDs in series with button
Measuring voltage across a 2V LED with multimeter
Potentiometer setup

The labs were relatively stress free, but enlightening all the same.

At home, I assembled a basic switch using my toilet. I wired it so that when the seat was touching the tank, an LED would turn on, reminding the user to please put the seat back down.

Instead of the Arduino, I used a 3V battery for power, along with a single 220-ohm resistor and a single LED. Building a simple circuit was relatively easy in theory, but a different ballgame when it came to actually assembling it. It is much more time-consuming than it seems, simply due to technical bumps in the road: adjusting wire lengths, positioning the components correctly, consistently getting the light to turn on, etc.

Questions:
How does 238 = 11101110 in binary? (see the breakdown below)
How do you build a portable circuit sans the breadboard with a voltage regulator?
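
On the first question, writing 238 as a sum of powers of two makes the pattern visible:

238 = 128 + 64 + 32 + 8 + 4 + 2
place values: 128  64  32  16  8  4  2  1
bits:           1   1   1   0  1  1  1  0   ->  11101110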