HCI | TechX

August 14, 2020

Homework Day 8 – Nicole

Description: Once the infrared distance sensor detects the presence of an obstacle, the image displayed in Processing breaks down into clusters of pixels. The clusters are determined by the pixels’ hue.

Video documentation
//Coding in Arduino

void setup() {
  // Initialize serial communication at 9600 baud
  Serial.begin(9600);
}

void loop() {
  // Read the raw analog value from the IR distance sensor on pin A0
  uint16_t value = analogRead(A0);
  double distance = get_IR(value);  // Convert the analog reading to a distance
  Serial.write(int(distance));      // Send the distance to Processing as one byte
  delay(500);                       // Delay 0.5 s between readings
}

// Convert a raw analog reading to a distance (cm)
double get_IR(uint16_t value) {
  if (value < 16) value = 16;  // Clamp so the denominator stays positive
  return 2076.0 / (value - 11.0);
}
//Coding in Processing
import processing.serial.*;

PImage img;
int rectSize = 10;

Serial myPort;
int valueFromArduino;

void setup() {
  size(876, 550, P3D);
  background(255);
  printArray(Serial.list());
  // This prints out the list of all available serial ports on your computer.
  myPort = new Serial(this, Serial.list()[3], 9600);

  img = loadImage("keke.PNG");  // Load the image once, not on every frame
  img.loadPixels();             // Make img.pixels[] available
}

void draw() {
  // Read the most recent value from the Arduino
  while (myPort.available() > 0) {
    valueFromArduino = myPort.read();
  }
  if (valueFromArduino > 0) {
    valueFromArduino = int(map(valueFromArduino, 0, 40, 0, width));
    for (int x = 0; x < img.width; x = x + rectSize) {
      for (int y = 0; y < img.height; y = y + rectSize) {
        int index = x + y * img.width;
        color c = img.pixels[index];
        float b = blue(img.pixels[index]);
        // Darker (less blue) blocks are displaced further along z
        float z = map(b, 255, 0, 0, valueFromArduino);
        noStroke();
        fill(c);
        pushMatrix();
        translate(x, y, z);
        rectMode(CENTER);
        rect(0, 0, rectSize, rectSize);
        popMatrix();
      }
    }
    println(valueFromArduino);  // This prints out the mapped value from Arduino
  }
}

Question 1

Based on the reading, Art_Interaction_and_Engagement, write a reflection about the ways technology was used in your project.

The installation we created fits the “dynamic interactive” category: human viewers interact with the displayed image by having their physical presence detected by the infrared sensor, and can watch their influence on the exhibited graphics. Though the installation might sound interesting, the boredom it can induce over a prolonged period of interaction makes the engagement it provides merely “attracting”. Once viewers figure out the pattern and mechanism the exhibit follows, especially given how advanced technology has become, they can quickly lose interest.

Question 2

Is this an engaging interaction?

Within a time frame of minutes, this interaction may be engaging; it introduces an interesting mechanism that combines computational media and physical computing. However, it lacks lasting appeal to the audience, since distance is the only factor that influences the artwork and the graphic motion follows a single pattern.

Question 3

What would you do to make it more engaging if you had more time to work on this project?

I would probably introduce the concept of “calibration” to this program. Instead of having the scale of graphic breakdown (my term for the clusters of pixels separating from one another) be directly proportional to the viewer’s distance from the sensor, the viewer would have to stand at a designated distance from the sensor in order to see the full image. This means that a viewer five steps in front of the target position would see the same thing as a viewer five steps behind it. Away from that position, the clusters of pixels would break down, change into random colors, and roam in random directions in an unpredictable, uninterpretable manner. The piece wouldn’t instruct viewers on whether interaction is possible; instead it would have them explore and experiment. A person who unintentionally passes the sensor might notice a change in the displayed graphics and become curious to learn more about it.
