
9.0 ICM

  • Writer: Astro Lee
  • Nov 8, 2023
  • 2 min read

I started this week by messing with the ml5 sketches that were posted on our GitHub, and my friend caught that it seemed to be calculating the distance between the fingers to get the stretch of the hand. It turns out she was right.
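As a rough sketch of what that might look like: ml5's handPose model returns 21 keypoints per hand, each with x/y coordinates, so "stretch" could just be the distance between two fingertip keypoints. The indices here follow the MediaPipe hand layout ml5 uses (4 = thumb tip, 20 = pinky tip), but this is a guess at the sketch's logic, not its actual code.

```javascript
// Euclidean distance between two keypoints, each shaped {x, y}.
function distance(a, b) {
  return Math.hypot(a.x - b.x, a.y - b.y);
}

// "Stretch" of the hand, read as thumb-tip-to-pinky-tip distance.
// Assumes the MediaPipe hand layout: index 4 = thumb tip, 20 = pinky tip.
function handStretch(keypoints) {
  return distance(keypoints[4], keypoints[20]);
}
```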


And then there was one with a lip wrap? It was awesome to see how accurate it seemed to be. I also had a lot of fun with this one.


Below: my friend's perspective of me messing with this sketch



And then there was this sketch that was much more accurate than the one before. It seemed like it was also guessing or recognizing the rotation of the hand, and I was shocked at how well it tracked the translation of the hand. Super cool.
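One way a sketch could estimate that rotation, as a sketch under assumptions: take the vector from the wrist keypoint to the middle-finger base and read its angle with atan2. The indices again follow the MediaPipe hand layout (0 = wrist, 9 = middle-finger base); this is an illustration, not the sketch's actual code.

```javascript
// Rough in-plane rotation of the hand, in radians: the angle of the
// wrist -> middle-finger-base direction. Assumes MediaPipe indices
// (0 = wrist, 9 = middle-finger base).
function handAngle(keypoints) {
  const wrist = keypoints[0];
  const middleBase = keypoints[9];
  return Math.atan2(middleBase.y - wrist.y, middleBase.x - wrist.x);
}
```

Translation would just be the wrist keypoint's position over time, so the model only has to report keypoints and everything else falls out of a little geometry.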


And then there was this one, and I got excited.


Once I'd had enough fun with the ml5 sketches, I decided to move on to Teachable Machine. Since my friend was around, I decided to teach Teachable Machine the difference between me and her.


(Excuse my language setting)


It was interesting to see the machine fighting over whether it was me or my friend, but it gave me inaccurate results, so I realized that I needed an extra class.



We now had an option for 'Both'! Since we couldn't easily pop in and out of the frame, it was great to have this option in there.
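For context on how the 'Both' class slots in: a Teachable Machine image model hands back an array of {label, confidence} results, one per trained class, and a sketch typically just displays the highest-confidence one. A minimal sketch of that, with stand-in labels ('Me', 'Friend', 'Both') for whatever the model was actually trained with:

```javascript
// Pick the highest-confidence result from a classification.
// `results` is an array of {label, confidence} objects, one per class.
function topLabel(results) {
  return results.reduce((best, r) =>
    r.confidence > best.confidence ? r : best
  );
}
```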


Then it started giving us relatively accurate results.
But as we played around, I realized that the color of our shirts was messing with the results.


Same thing happened here.


It also was kind of dumb(?) when it came to recognizing blank space. If I hid behind my water bottle, it thought the frame was blank.


But it picked up really well on Jiayi's arm. Red sweater, so anyone in a red sweater was Jiayi.


It was also very dependent on whether my mouse focus was on the sketch or not. I'm assuming this is more of an HTML issue? The sketch itself was working fine, but I had clicked onto the URL bar, so the recognition went static (inactive).


And the final result was pretty stable, except that it registered more strongly(?) with color than with the face or other distinguishing features. But color also makes sense.


Conclusion: I really realized that there is A LOT of data needed to create accurate results, and correct ones too. Now I see why the data we feed to the system is important. AI and technology echoing social biases and discrimination feels more imminent; this is a current, present-day issue that is already taking place, and I'm growing more concerned about the labels and the images we feed into these systems.

