Uncertain at its best, tragic at its worst.

I have been combing over my work; at this point, the current version of my assignment is about 300 lines of code. But don't be fooled, it is the fifth version of the work. I have restructured the data, tried different libraries, and used different methods of working it out, and it has taken up the last couple of weeks of my life, staring at endless lines of code, wondering why it went wrong and why it doesn't work.

It is always the best experience when I show my work to my programming professor: he looks at it, and within two seconds of peering into the nest of code I have created, he spots the problem with extreme accuracy, smirks, and then shows me that hours of my life are equivalent to minutes of his. To be fair, he has been alive longer than I have, but it is his experience I want.

Back to my work. It has been a tremendous undertaking that I, of course, misconstrued as a rather simple task, and it ended up being a colossal coding nightmare. It is my own ego that lets what I want to do always get ahead of what I can actually do.

Currently, I am using a handful of libraries: Minim for the sound input, Toxiclibs for the visual interaction of the particles on the screen, ControlP5 for the buttons, and javax.swing.JOptionPane for the popup that takes a string input from the user. Two of these libraries I had no experience using, which should have been my first red flag, but I glanced over it, overestimating myself and stretching myself very thin.

Minim is a sound library for Processing, free and available for download. In my project, I am using it to get the average sound level of the room in whatever environment the piece finds itself in. One early decision was to combine the left and right inputs of the mic into a single value and, once the ArrayList holds 5000 sound samples, remove the oldest one so the average only reflects recent input. This function is fully working, I think.
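A minimal sketch of that idea, assuming the mix level is read once per frame and kept in a rolling window of 5000 samples (the names `levels` and `roomAverage()` are mine for illustration, not from the actual project):

```
import ddf.minim.*;

Minim minim;
AudioInput in;
ArrayList<Float> levels = new ArrayList<Float>();

void setup() {
  size(800, 600);
  minim = new Minim(this);
  in = minim.getLineIn(Minim.STEREO);   // stereo line-in from the default mic
}

void draw() {
  // in.mix already combines the left and right channels into one buffer
  levels.add(in.mix.level());
  if (levels.size() > 5000) {
    levels.remove(0);                   // drop the oldest sample, keep a rolling window
  }
  background(0);
  text("room average: " + roomAverage(), 20, 20);
}

float roomAverage() {
  if (levels.isEmpty()) return 0;
  float sum = 0;
  for (float l : levels) sum += l;
  return sum / levels.size();
}
```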

Toxiclibs is an interesting find because it provides almost everything I need for the interaction of the bubbles on the screen. Each bubble has its own attraction or repulsion, so no two bubbles ever touch. This is working: each bubble is created and interacts with the screen the way I designed it. But one part of my project is that, after hearing sound over a certain threshold, the radius of the bubble should increase, and I can't reach or edit the radius because I didn't bother making the bubble into its own class. I wonder if I can edit the radius without changing the structure again.
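One way out might be a thin wrapper class that holds both the Toxiclibs particle and its radius, so the radius becomes reachable from anywhere in the sketch. This is only a sketch under my assumptions: it uses VerletPhysics2D with a repulsive behavior per particle (called AttractionBehavior2D in newer toxiclibs builds, AttractionBehavior in older ones), and the Bubble class and grow() method are my own naming, not something already in the project:

```
import toxi.geom.*;
import toxi.physics2d.*;
import toxi.physics2d.behaviors.*;

VerletPhysics2D physics = new VerletPhysics2D();
ArrayList<Bubble> bubbles = new ArrayList<Bubble>();

class Bubble {
  VerletParticle2D particle;
  float radius;

  Bubble(float x, float y, float r) {
    radius = r;
    particle = new VerletParticle2D(new Vec2D(x, y));
    physics.addParticle(particle);
    // negative strength pushes neighbouring particles away so bubbles never overlap
    physics.addBehavior(new AttractionBehavior2D(particle, radius * 2, -1.2f));
  }

  void grow(float amount) {
    radius += amount;   // the radius is now editable without touching the physics setup
  }

  void display() {
    ellipse(particle.x, particle.y, radius * 2, radius * 2);
  }
}
```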

ControlP5 is working; it is what adds the bubbles to the screen after the user presses the button. There is another thing I want it to do, add a button outside the setup method, but that is proving difficult. If I try to create a button in the draw method, it doesn't like it one bit; essentially, it doesn't draw it. I need this button to zoom back out of the object, so the button should only appear after the camera has zoomed in to the object.
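One workaround I am considering, sketched here under the assumption that the button only needs to become visible later rather than actually be created later: make the button once in setup, hide it, and show it after the zoom happens. ControlP5 controllers have hide() and show() for this, and the zoomedIn flag below is a placeholder of mine, not something already in the sketch:

```
import controlP5.*;

ControlP5 cp5;
Button zoomOutButton;
boolean zoomedIn = false;   // would be set to true once the camera has zoomed to a bubble

void setup() {
  size(800, 600);
  cp5 = new ControlP5(this);
  // create the button once, up front, and keep it hidden until it is needed
  zoomOutButton = cp5.addButton("zoomOut")
                     .setPosition(20, 20)
                     .setSize(100, 30);
  zoomOutButton.hide();
}

void draw() {
  background(0);
  // reveal the button only while the camera is zoomed in to a bubble
  if (zoomedIn) {
    zoomOutButton.show();
  } else {
    zoomOutButton.hide();
  }
}

// ControlP5 calls this automatically because the button is named "zoomOut"
void zoomOut() {
  zoomedIn = false;
}
```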

The Swing library provides the popup text field used by the users.
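This part is a single call; JOptionPane.showInputDialog pops up a modal dialog and returns whatever the user typed, or null if they cancel. A small sketch of how I use the result as a topic label (the function name and fallback value are just for illustration):

```
import javax.swing.JOptionPane;

String askForTopic() {
  // blocks until the user answers; returns null if the dialog is cancelled
  String topic = JOptionPane.showInputDialog("What topic do you want to discuss?");
  if (topic != null && topic.trim().length() > 0) {
    return topic.trim();
  }
  return "untitled";   // fallback if the user cancels or enters nothing
}
```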

My main problems currently are the zooming feature (zooming in and back out) and increasing the size of the bubble. They should be resolved by Monday.


Another Time.

My lecturer's biggest concern about my team's work is that maybe we are still circling around the problem instead of hitting it directly.

The main problem is that we think we are being fluid simply because we keep changing the final outcome.

Our project is aimed at helping first-year BCT students in particular, like an icebreaker: it focuses a conversation on a topic and opens it up with a visual representation of who is talking, showing who is contributing, who is talking too much, and who is not talking enough. We provide the topics and stories so they have something to discuss other than themselves, because even when we are not talking about ourselves, our answers to certain things, especially open-ended questions, say a lot about us that other people can perceive.

Initially, it was going to be a mobile phone app, which let me start building one, but then I found out that the level of accuracy I want can't be had from a phone microphone; since they are not precise, there are a lot of discrepancies from brand to brand, even phone to phone.

During our presentation to the class, some people asked whether it would be bad to have the talking indicator on such a small screen, since people could miss it if they are paying attention to the speaker rather than the phone.

We then changed to a desktop Processing build, with me doing most of the coding and my partner doing the design. We ran into another problem: getting multiple mic inputs into one computer, with the interface costing as little as $300. Naturally, we are incredibly rich students and only decided not to buy it because we didn't want to, but this still caused us problems. We needed to change again because there was another technological roadblock in front of us.

We had to ask ourselves: what is the core idea of our project, what is malleable, and what are the constant features our final project has to have? Since we had discovered another problem that is hard to overcome, we needed to go almost back to the drawing board. We talked to our TA, who helped us one more time and asked us to question our actual idea for the project. He said that if we are still having problems, then maybe it is time to transform the original idea, not necessarily into something simpler, but into something more accessible both for the people making it, us, and for the people receiving the final product.

Now we have evolved the initial question to suit the environment we have observed and found ourselves in. We looked at the question of helping students identify who is contributing a lot, too much, or not enough, asked whether it was feasible, and found that microphones are fickle things.

So we transformed the heading of our project one last time, focusing on a visual representation of a conversation about a particular topic chosen by the people using the device. The users input a topic they want to discuss, and the screen creates a small bubble with the topic in the middle. A mic picks up the conversation, detects the level of sound, and adds to the diameter of the circle, making it bigger. When the users press back to the overall view, they can create more topics, connect them, and see visually what they talked about the most and the least. They can come back to it later, connect bubbles, and see where the growth is.
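The core of that loop could be as small as the sketch below, under my current assumptions: the bubble's radius lives on the Bubble class from earlier, it grows whenever the averaged mic level crosses a threshold, and the threshold and growth rate are placeholder numbers I would still have to tune, not values from the project:

```
// illustrative placeholders, not tuned numbers from the project
float threshold = 0.05f;    // minimum averaged mic level that counts as "talking"
float growthRate = 0.2f;    // how much radius one loud frame adds, in pixels

void updateActiveBubble(Bubble active) {
  float level = roomAverage();     // rolling average from the Minim sketch above
  if (level > threshold) {
    active.grow(growthRate);       // louder conversation, bigger topic bubble
  }
}
```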

This is the current goal, and I am trying to get it functioning by this coming Monday.

The reason for this switch, looking back at it now, is that I was always getting ahead of myself; I didn't act as early as I would have liked when the live problems I have described here presented themselves. I learned that nothing is exact, from the format of the final outcome to the idea itself. That doesn't mean starting over, but things can change, and that should be considered a good thing and not a hindrance, even if it is not a nice feeling to know you almost wasted your time putting effort into one thing only for it to be changed in the end.