Magic Beer Windows [final]

In a drastic departure from my original intention, I decided to resurface an old idea and use AR to enhance it: understanding beer. For my programming design final last year, I tried to create a visual system for understanding certain metrics (bitterness, color, category) of beer.

Beer is both delicious and surprisingly complicated, as is any fermented product, since the final outcome can only be controlled to a point. Beer is also a luxury item, which means people are often willing to take the time to understand how it tastes and what goes into making it. This matters when designing an AR experience, since people have to go out of their way to download an app and pull out their phones to use it.

While I know using AR to display information about a product, particularly a food-related one, isn’t incredibly novel, I think beer is well suited for it: people often take the time to really savor and understand what they are consuming, and there is no room on a beer label for that kind of information. [I’m generally for increased knowledge about whatever it is you are consuming.] I also think beer can be more informal (and less complex) than wine, which is why I started here. While the final flavor of beer results from a number of factors, I think being able to view some of the ingredients and identify categories over time would help create a more discerning palate. It’s like learning the language of beer organically: experiential learning.

At first I had wanted to use some of my previous designs as the image targets for beer, but I realized that most beer bottles naturally have their own perfect image targets: labels.

 

I started with Brooklyn Brewery beers since their labels are well contained and they also have detailed documentation on their products. I also relied on Under The Label for more details. Ideally, it would be cool to make an API that this information could be pulled from and to which all breweries would contribute.

Next I tried to figure out what information to display. Some ideas I was playing with: history, ingredients, hops, malt, yeast, ecological footprint, location, taste. Ultimately I decided on a mixture of these. I do think this is an area I could play with more, but it ended up taking a back seat to some of the purely technical issues and development.

Since I wanted to display multiple sets of info, I decided to dive into Vuforia Virtual Buttons, the idea being that obscuring notable parts of the image target cues an event. I thought it might be nice to have a “button” on the physical object: you would most likely be holding the beer in one hand and the phone in the other, so you would only have to move your fingers around on the hand holding the bottle. I got this to work pretty well with my hand.
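For reference, the button wiring in Unity boils down to something like the sketch below. It follows the handler pattern from Vuforia’s Virtual Buttons sample; the exact class names differ between SDK versions, and the panel-switching call is just a placeholder.

```csharp
using UnityEngine;
using Vuforia;

// Registers for virtual button events on the label image target.
// Pattern follows Vuforia's Virtual Buttons sample; class names vary by SDK version.
public class LabelButtonHandler : MonoBehaviour, IVirtualButtonEventHandler
{
    void Start()
    {
        // Hook this handler up to every virtual button nested under the image target.
        foreach (var vb in GetComponentsInChildren<VirtualButtonBehaviour>())
            vb.RegisterEventHandler(this);
    }

    public void OnButtonPressed(VirtualButtonBehaviour vb)
    {
        // Fires when a finger covers the button region on the physical label.
        Debug.Log("Pressed: " + vb.VirtualButtonName);
        // e.g. switch to the hops or food panel here
    }

    public void OnButtonReleased(VirtualButtonBehaviour vb)
    {
        Debug.Log("Released: " + vb.VirtualButtonName);
    }
}
```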

Some of the technical issues revolved around displaying multiple ‘screens’ of info; I was having a button logic problem. I could get the flow of the screens working in one direction, but not the other (i.e. you could toggle from hops to food cleanly, but not the other way around).
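One way around that one-directional flow, sketched below with three hypothetical panel GameObjects, is to have each button select its own panel outright instead of toggling from whatever is currently showing:

```csharp
using UnityEngine;

// Minimal screen-switching sketch: each virtual button calls ShowPanel with its own
// panel, so the flow works in any order (hops -> food, food -> hops, etc.).
// Panel names here are hypothetical placeholders.
public class InfoScreens : MonoBehaviour
{
    public GameObject hopsPanel;
    public GameObject foodPanel;
    public GameObject historyPanel;

    public void ShowPanel(GameObject panel)
    {
        hopsPanel.SetActive(panel == hopsPanel);
        foodPanel.SetActive(panel == foodPanel);
        historyPanel.SetActive(panel == historyPanel);
    }
}
```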


Ultimately, where I got to can be seen below, with the icons serving as virtual buttons. I also added sounds that trigger with each page, just for some more user feedback.
Bottle opening with app:

Food cue:

Hops cue:

In the future, I would definitely take the notes from Rui (below) to make the augmented images more stable and easier to trigger. The buttons also need some work, since they can be a bit fickle. I would also play with some of the content and design.

Notes from Rui: Use cylindrical image target, add autofocus into code for app
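On the autofocus note, the fix is roughly the snippet below; the VuforiaARController callback and CameraDevice focus-mode call come from the Vuforia Unity API of that era, and the exact entry point varies across SDK versions.

```csharp
using UnityEngine;
using Vuforia;

// Enables continuous autofocus once Vuforia has started, per Rui's note.
public class ContinuousAutofocus : MonoBehaviour
{
    void Start()
    {
        // Wait for Vuforia to initialize before touching the camera.
        VuforiaARController.Instance.RegisterVuforiaStartedCallback(OnVuforiaStarted);
    }

    void OnVuforiaStarted()
    {
        CameraDevice.Instance.SetFocusMode(CameraDevice.FocusMode.FOCUS_MODE_CONTINUOUSAUTO);
    }
}
```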

Tango Tango

This week, I worked with Gal to get Tango up and running.

I must say, this week was far more successful, which I think had to do with the fact that Tango comes as a nice package and we had already been through the updating/software struggles.

Gal and I initially wanted to see what dropping movies into an AR setting would look like, but it turns out video playback on mobile devices is not simple or cheap (expensive plugins). We liked the idea of using Depthkit video to place walking, talking people with an apparition-like quality into the environment.

After scrapping the video idea, we turned our attention to animated avatars. Gal had a fully formed avatar of herself from a previous project, so we wanted to see what that image would look like in AR. After rigging her body in Mixamo, we imported the obj/mesh/image files as the body, along with some pre-made dance animations from Mixamo. We followed these guidelines to build the animated character in Unity.

To quickly scan my body, we used the Structure Sensor, downloaded the .obj, uploaded the files to ReMake by Autodesk to re-orient the body and compile it into an .fbx, uploaded that to Mixamo, rigged it, and went from there.

We played with several iterations and liked the effect of these bodies dancing around in a circle, to add some life to a dull day or room. As an homage to one of the earliest internet memes, we added some tunes. 
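The circling itself can be as simple as rotating each avatar around a shared center point. This is just a sketch of that idea; the center transform and speed are assumed parameters, not part of the original project files.

```csharp
using UnityEngine;

// Spins an avatar around a shared center point so several copies circle together.
// "center" is an empty GameObject placed at the middle of the circle (an assumption).
public class CircleDancer : MonoBehaviour
{
    public Transform center;
    public float degreesPerSecond = 30f;

    void Update()
    {
        if (center != null)
            transform.RotateAround(center.position, Vector3.up, degreesPerSecond * Time.deltaTime);
    }
}
```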

FINAL PROPOSAL

For my final project, I am interested in augmenting the human form. I find it interesting how bodies become unintentional actors in AR experiences, and how viewing the ‘real world’ through a screen removes you from it. With my final idea, the main thing I want to address is the relationship between the person/body in front of the screen and the one behind it.

Plan A. I know using a whole body as an object target is kind of tough, but I am really curious to see if I can get Vuforia to recognize parts of the body, like the face or a hand. I like the idea of playing with mental perception by turning a person’s hand into a non-human appendage, like a crab’s claw, or even “augmenting” the human form: what would it be like to have 10 fingers?

Plan B. A wearable with multiple object targets that change depending on the shape of the body. I thought this could be a kind of game between two people. Perhaps the augmentation could be in words and the other person’s movement style forms a poem for the person behind the screen. Or a song, if audio was triggered.

I am going to start with plan A and move onto plan B if the technical side is not working out.

 

AR + APIs: Experiments in Failing

 

This week was not good.

I spent about 24 hours just fighting operating systems and software updates, and still couldn’t get my AR experience onto the iPhone. I was getting a ‘product_name’ error.

At that point, I decided to just modify your code and swap in my own image target, which worked.

From there, I built my AR experience from scratch again (despite numerous Unity issues) and tried to modify this code to get some interaction with the weather API. However, my Unity scripts kept running into recognition issues: it seemed Unity wasn’t recognizing my GameObjects.

All in all, I felt like I fought inane details instead of learning how to actually call APIs in Unity.
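For next time: the core of calling an API from Unity is just a coroutine around UnityWebRequest. The sketch below assumes a hypothetical weather endpoint and API key, and the exact error-checking properties depend on the Unity version.

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.Networking;

// Fetches raw JSON from a weather API inside a coroutine.
// The URL and key are hypothetical placeholders, not from the original project.
public class WeatherFetcher : MonoBehaviour
{
    [SerializeField]
    private string url = "https://api.openweathermap.org/data/2.5/weather?q=New+York&appid=YOUR_KEY";

    void Start()
    {
        StartCoroutine(GetWeather());
    }

    IEnumerator GetWeather()
    {
        using (UnityWebRequest request = UnityWebRequest.Get(url))
        {
            yield return request.SendWebRequest();

            if (request.isNetworkError || request.isHttpError)
            {
                Debug.LogError("Weather request failed: " + request.error);
            }
            else
            {
                // Raw JSON; parse it (JsonUtility or a JSON library) before driving the scene.
                Debug.Log(request.downloadHandler.text);
            }
        }
    }
}
```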

 

AR in Unity

This week, Ty and I started working together.

We had grand plans as far as ideas to execute, but realized we were both starting at square 1.3. Since we both had the basics of Image Target AR in Unity running, we decided to get an Android phone and try object scanning with the Vuforia app.

As it turns out, scanning is a bit more of a pain than we had envisioned. I think most of these pains could have been smoothed over with a proper setup, including good lighting, a rotating dolly, and a gray background. We did get augmentation with the in-app test using a sesame oil bottle, which had many differentiated colors on its label.

I took this home to work on and pretty much followed the tutorials by the book, but was having problems. I have a feeling they stem from a poor scan.

Unfortunately, ITP was closed all day Thursday due to weather, so I couldn’t play around with the Android phone to re-scan; instead I went back to Image Target recognition. I got that up and running fairly quickly, then moved on to getting the app to run on iOS.

I’ll say now that I’m still in the process, but wanted to post this update. I realize now that I need to update macOS in order to update Xcode, so that Xcode will recognize my iPhone, which is running the latest iOS. I understand most of the steps, but updating my computer’s OS isn’t trivial…it’s a bit old.

Regardless, the process of sifting through the tedious details and multiple attempts has increased my comfort with Vuforia and Unity.
** Question to ask Rui ** What is the proper organization of folders and scenes in Unity projects? Where should I keep new packages, assets, etc.? What about adding new assets in the middle of a project?

While I continue to work on this (goals: 3D object recognition, iOS operation, more advanced scripts), here is a little bit of the AR I set up in Unity.
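As a note to myself, the image-target piece boils down to a trackable event handler like the one below. It mirrors the DefaultTrackableEventHandler that ships with the Vuforia Unity samples, so the exact names may differ by SDK version.

```csharp
using UnityEngine;
using Vuforia;

// Shows the augmentation's renderers only while the image target is being tracked.
// Mirrors Vuforia's DefaultTrackableEventHandler; names may differ by SDK version.
public class ImageTargetHandler : MonoBehaviour, ITrackableEventHandler
{
    private TrackableBehaviour trackable;

    void Start()
    {
        trackable = GetComponent<TrackableBehaviour>();
        if (trackable != null)
            trackable.RegisterTrackableEventHandler(this);
    }

    public void OnTrackableStateChanged(TrackableBehaviour.Status previousStatus,
                                        TrackableBehaviour.Status newStatus)
    {
        bool found = newStatus == TrackableBehaviour.Status.DETECTED ||
                     newStatus == TrackableBehaviour.Status.TRACKED ||
                     newStatus == TrackableBehaviour.Status.EXTENDED_TRACKED;

        foreach (var r in GetComponentsInChildren<Renderer>(true))
            r.enabled = found;
    }
}
```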

Assignment 1: Spatially Augmented Reality

Assignment 01
Create a spatial augmented reality experience:
Find an element in a room or wall and augment it.
Give it life using data, user interaction or any other means.
Remember the 3 characteristics of AR/MR.
1. combines real and virtual
2. interactive in real time
3. registered in 3D space

Jess and I worked on this assignment together.
We knew we wanted to use some object on the floor as our projection surface, so after much scouring and thinking, we decided to use one of the many dress forms.

We decided to use Blob Detection for Processing to track a particular color with computer vision as our interactive input. The idea was to pin multiple colors to the dress form and attach an image of a different organ to each color.

We also wanted to funnel the Processing sketch via Syphon into MadMapper, which was the first time either of us had used Syphon or MadMapper. Here are some experiments:

Unfortunately, since we were using video in Processing, we couldn’t figure out how to adapt our code so that the computer vision layer would show up in MadMapper. All we were able to get across Syphon was the raw video feed.

Therefore, we just decided to run the Processing sketch fullscreen and adjust the projector. We also used a single color as a proof of concept.

While we didn’t execute the full vision, it was a good exercise to jump back into some Processing sketches and also a decent introduction to MadMapper, which is awesome!