We are nearing the end of the first half of the second semester, and things are ramping up.
Next week there is a presentation in front of the class on the progress of the project, and my project is still progressing very nicely. Speaking in class again, I can't help but think the class will only know me as the one person who got away with making the same project over the course of two years. But at the very least, when it is presented in November, it will finally look like a project that has been in development for two years.
The development phase is close to finishing, and we are now looking at recruiting people for the testing phase when we come back from the break in around three weeks. So yeah, it is becoming more real as the days roll by, and my expectations for the project rise with it.
In terms of development, most of the front end is done: the buttons for adding people, adding and removing agenda topics, timers for the individual bubbles, and a timer for the overall meeting. We have also added the classifier customisation for the Natural Language Classifier.
We are also about halfway through the reporting feature and its design.
The plan is going to plan, which is nice for once
My three keywords:
Sensory Visualisation, Power Balance and Future Space Robots
Speaking with Ben, our advisor, we were given direction on the vision of our program and how to take it into real life.
We conducted a physical run of what we think our game will be, at a bare minimum. These are some of the rules we came up with:
- 2 people
- Initially, one player is blind and one is deaf (there is no switching of senses for this experiment)
- No outsider can help the two
- There have to be light obstacles to cross
- There is an objective, in this case a red shiny button
- Only the blind player can reach the button/objective
Unique to this run:
- Speech is stationary (the speaker can't move with the blind player)
- Information is shared using spoken English
For this run, here is what we observed and found out:
- Breakdown in interpretation: since the English language is diverse, commands like "small", "a little", "turn" and "rotate" are very subjective; "a little" can mean different actions to different people.
- Trust issues: our initial guinea pigs knew each other fairly well, but we discovered that the blind player simply has to trust the voice to guide them to the destination. Walking blindfolded in real life is hard even if you know the room well, let alone walking blindfolded over obstacles.
- Maybe don't use words: if we are running with robots, beeps can work, though they can still be interpreted differently.
- Use instructions that can't be misinterpreted, but this removes the player's unique style.
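One way to picture the "instructions that can't be misinterpreted" idea is to replace free-form speech with a small fixed command vocabulary. This is just a hypothetical sketch (the command names and granularity are my own assumptions, not part of our ruleset):

```python
from enum import Enum

class Command(Enum):
    """A deliberately tiny, unambiguous command vocabulary.

    Unlike spoken English ("turn a little"), each command maps to
    exactly one physical action, so the blind player can't
    misinterpret it -- at the cost of the speaker's personal style.
    """
    FORWARD_ONE_STEP = "forward"
    TURN_LEFT_90 = "left"
    TURN_RIGHT_90 = "right"
    STOP = "stop"

def interpret(word):
    """Map a spoken word to a command, rejecting anything ambiguous."""
    for cmd in Command:
        if cmd.value == word:
            return cmd
    raise ValueError(f"ambiguous or unknown instruction: {word!r}")

print(interpret("left"))  # Command.TURN_LEFT_90
```

A fuzzy word like "little" would be rejected outright instead of being guessed at, which is exactly the trade-off we noticed: no misinterpretation, but also no individual style.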
During class time, we had the chance to visit Spark for an ideation session and a tour of the building with Amelia.
We were given the chance to take part in an ideation session on the spot to propose a product that Spark could produce in the future.
We proposed an IoT device, assuming the world reaches a point where most appliances and objects are connected.
If everything is connected, there is data; if there is data, there is predictability. With so much data all around us waiting to be captured and analysed, there will be patterns to be discerned, which again comes back to predictability.
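The data-to-prediction chain can be sketched in a few lines. This is a minimal toy example, assuming a hypothetical log of fridge-door openings; a real connected home would stream this data from the appliance itself:

```python
from datetime import datetime, timedelta

# Hypothetical usage log: when the fridge door was opened each morning.
fridge_openings = [
    datetime(2017, 10, 2, 7, 30),
    datetime(2017, 10, 3, 7, 45),
    datetime(2017, 10, 4, 7, 20),
    datetime(2017, 10, 5, 7, 35),
]

def predict_next(events):
    """Predict the next event by averaging the gaps between past events."""
    gaps = [(b - a).total_seconds() for a, b in zip(events, events[1:])]
    avg_gap = sum(gaps) / len(gaps)
    return events[-1] + timedelta(seconds=avg_gap)

print(predict_next(fridge_openings))  # roughly 7:36 the next morning
```

Even an average this naive turns raw logs into a prediction; real pattern recognition (the kind I talk about with IBM Watson below) would just be a much smarter version of the same loop.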
The example we used was the fridge and the oven/stove top, all controlled by a phone that determines which objects should connect at a data level.
It would track the contents of the fridge and the cooking time of the food in the oven and on the stove.
The beauty of what we came up with is that it isn't limited to the fridge and cooking. It could also track how much water we use in the shower and represent it in milk bottles, an important visualisation for drought-prone countries, or tell us what our sleeping position says about when we are about to wake up.
It is personalised: no two people cook the same way, and no two people sleep or use the shower the same way. There is data here waiting to be mined and examined.
Personally, I would extend this project using AI, particularly IBM Watson, because I know more about it, and because we can use AI specifically for pattern recognition.
This data will be relevant to us on an individual level.
This is what our wall of ideas looks like:
And so the story goes.
Initially, my object for the interrogation was a photo with my friend at
my birthday party before she left. I did the process for it and talked about it with Clint, but she is not a big fan of videos and photos and didn't want her face in my artwork.
It's not much of a problem. Even if I don't have the original image to show, I have linked above the final outcomes of the process, and it turned out well. It doesn't show her face, so it's okay; I twisted and interrogated the image until it was almost unrecognisable.
So it is not a total loss, since I can still apply the process to a new image.
This is a ticket from the last movie we saw, so it is just as significant to me.
It is strange; I never placed memories into images, only in moments, and this was a pretty important moment in my life.
Life is a progress bar
Transferring all the principal shots onto a PC, then to a hard drive for backup, because I learn from my mistakes.
The shooting of the film has been pretty crazy. We have finished all the shooting for the five-minute film we have lovingly called Destination: Mars. Coming up with the idea was a team effort, but I was the head writer for the team and also the director of the film.
I found the writing process really fun; it is the first time I have written a script for anything, and also my first time directing. It is really fun playing with characters' names and backstories. But one of the hardest things is balancing the creative input of the writer against staying grounded in facts. Making the story seem like it could actually happen is very hard: you either tell a boring, predictable story, or go overboard with the future and do time travel or something.
I wrote the script after the first weekend of the semester, and in the following studio lecture we were told to make it more comedic. I was panicking because I had written it with a serious undertone, but my team was very supportive and decided to keep the script, with the team editing it to make it more relatable and add more jokes.
Shout out to my team member and director of photography, Kavita, for being so great. I yell angles and she gets them. Thanks!