Honors Thesis 1 – an overview of the past 6 months of concepting

Oh wow, how does one sum up so much time and work? The project hasn’t even started production, yet it already feels like an eternity. I’ve gone through 7 completely different ideas, 2 storyboarded films, 16 scripts, countless visual tests, and not enough meditation sessions. Alas, here we are, on what I believe (and hope) to be the actual idea.

At the moment, the story is about Larry, a window washing robot who loves to dance, and his struggle to cheer up a sad office worker by finding the right rhythm.

This is the most updated version of the script, which will likely be revised further this weekend.

I am approaching this project on two levels:

  • making a great story that brings people happiness
  • technically challenging myself to learn new skills

The technical level:

  1. This will be the first project where I do 3D animated human characters. There are several levels of effort this could involve (in modeling, rigging, and the type of animation), and I’m not sure yet how much I’ll invest in it.
  2. The largest technical question is what software I will animate this in (Maya or Blender). The pro for Blender is that I’m already relatively familiar with it, which would alleviate a lot of time spent solving technical issues. The pro for Maya is that I would gain experience in the industry-standard software. I still need to think this one through…
  3. And lastly, I am playing with the idea of using AI-generated art to create what it feels like to dance in an expressionistic way. I’ve spent a lot of time testing out different AIs and so far like Nightcafe’s Coherent the best (due to the aesthetic of the images and ease of use). Here are some initial tests I’ve done with this (I like the third and latest one the best):

I’m achieving this effect by exporting the individual frames of a 3D animated dance (example shown to the right), running each one through the AI, and then re-downloading and sequencing the frames.
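The sequencing step is mostly file bookkeeping: sort the re-downloaded frames by the number in their filenames and rename them into a clean image sequence. A minimal Python sketch of the idea (the folder layout, `.png` extension, and filename pattern are assumptions, not my actual setup):

```python
import os
import re
import shutil

def sequence_frames(src_dir, dst_dir, prefix="dance"):
    """Copy AI-processed frames into dst_dir with zero-padded numbering so a
    video editor (or ffmpeg) can read them back as an image sequence."""
    frames = [f for f in os.listdir(src_dir) if f.lower().endswith(".png")]
    # Sort by the first run of digits in each name, not alphabetically,
    # so frame_10 doesn't land before frame_2.
    frames.sort(key=lambda f: int(re.search(r"\d+", f).group()))
    os.makedirs(dst_dir, exist_ok=True)
    for i, name in enumerate(frames, start=1):
        shutil.copy(os.path.join(src_dir, name),
                    os.path.join(dst_dir, f"{prefix}.{i:04d}.png"))
    return len(frames)
```

From there the numbered sequence imports straight into an editor as a clip.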

Here’s an initial animation test I did in Blender. This is my first ever human animation (I did it from reference footage of myself).

The creative level:

I really don’t have the time to get into the full creative roller coaster of excitement and frustration on this project. It’s been a challenge to hone this idea down, and a larger challenge to choose between multiple ideas I developed at the same time. In the end I chose the window-washing one because it seemed the most feasible story-scope-wise (though not necessarily technically).

I’m also struggling with the creative feedback process. I sometimes feel that I value others’ opinions too much, in that if someone gives a note I immediately accept it as truth, which has led to a lot of confusion on my part about what the hell I’m trying to say with this film. I will process this thought/feeling more later.

Looking ahead:

  • finalize story
  • decide what software to animate in
  • find someone to collaborate on music with
  • interview dancers on what it feels like to dance

Alright this has been helpful to splurge out my thoughts. There’s a ton more where this came from, but I’m going to go do homework for class now. Talk to ya later.

Maya 13 – quad draw

Modeled a more final version of the wing today. Got very acquainted with the quad draw tool…

I learned:

One must already have an object selected to “draw on” for the quad draw tool to work. Click to make vertices, then press Shift to make the quad. I also learned the importance of the center pivot point button on the top menu, especially when one needs to mirror part of an object to save time.

I will learn:

How to flip the normal of a face. I tried what feels like every normal tool, but to no avail. So I resorted to deleting the flipped face and remaking it.

Maya 12 – let’s get back into this

Alrighty, I’m prepping to do an honors thesis short film in Maya and want to take some time to get more acquainted with the software. So we’re back to blogging, woo woo!

Today I started a tutorial on modeling a steampunk firefly.

I learned:

  • Put reference images in ‘source images’ folder so it will automatically show when importing an image.
  • A default cube has a width of one centimeter (Maya’s default working unit)
  • Set reference image alpha gain to 0.5 so it’s less visible and distracting
  • When parenting, remember to freeze transformations (although I still have some weird issues with warping despite this)
  • As a general method to model a complex object, roughly block out the shape in 3D before going in and doing the detailed version. This way one can see the way in which the object will fit in 3D space (and not just on the 2D reference image)
  • While selecting object, hold E + right click -> select ‘discrete rotate’… this will have the rotation click into place (very useful!)
  • Press B for soft select

I will learn:

  • Why the objects morph weirdly when parented object moves (despite freezing transformations)
  • Why mirror object doesn’t work completely for selection of objects that are parented

Web3.0 Research Blog #1

Okay today I continued to look into some Web3.0 stuff. I quickly bounced between:

  • decentraland (online virtual space where people are premiering videos and purchasing land)
  • this great blog for updates for Web3.0
  • this article about the best NFT projects
  • looking into the Imaginary Ones NFT project
  • which finally led me to discovering OpenSea which appears to be the main hub for NFT content… I want to especially look more into this one

I’ll use this list as a starting off point for future research. Until then, I shall go to my summer Italian class 🙂

Maya 11 – a sit down with a pro

The other week I sat down with a professional 3D artist at Warner who graciously answered a lot of my Maya questions and showed me some great tricks…

I learned:

For shading:

  • To clean up the material library, go to ‘edit’ -> ‘delete unused nodes’.
  • Within the node graph we can press Tab to quickly search/add the ‘aiStandardSurface’ node rather than pull it from the library as I’ve been doing so far.
  • Try not to change the specular color much (keep it at white) and only adjust the specular weight to determine the brightness of reflections.
  • The lower the IOR value, the more dramatic the difference between the edge reflection and the straight-on reflection of an object.
  • One sometimes needs to use a gamma node to match the colorspace of an object. I’m still not completely sure how this works, but from what I understand, even when one has the correct RGB number value it can look washed out because the colorspace is linear. But if one uses a gamma set to a lower value (say 0.54) it will correct the color to match better.
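Whatever Maya’s gamma node is doing under the hood, the underlying idea is a power curve on each color channel. A minimal sketch of the math (the 2.2 exponent is the common sRGB approximation, not something from this session):

```python
def srgb_to_linear(c, gamma=2.2):
    """Approximate conversion of one sRGB channel value (0-1) to linear
    light. The power curve darkens midtones, which is why a 'correct'
    RGB number can look washed out when treated as linear."""
    return c ** gamma

# A mid-grey picked in an sRGB color picker is much darker in linear light:
srgb_to_linear(0.5)  # ~0.218
```

Applying the inverse exponent goes the other way, which is roughly what a corrective gamma node is compensating for.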

For lighting:

  • We can look through a selected light to see exactly where it is pointing.
  • It’s also typical to map a softbox texture to lights to give a bit of roughness to them (especially in reflective objects). One can add the texture in the hypershade graph of the light with the ‘aiimage’ node. Make sure to use HDR images for the texture, though, which give a broader range of values than 0-1.
  • Also! One can turn off the specular for lights, which hides them in the reflections of objects (that’s so cool!!!).
  • Note: if Arnold gets confused when changing these lights, one might need to restart the render view to see updates.

MASH is a tool built into Maya to array objects. It allows us to set IDs for objects, which can be used to randomize which object is used.

I learned to link attribute values together with the ‘connection editor’. This solves my earlier issue with changing the samples of multiple lights at the same time!

Typically we shouldn’t parent objects, but rather make our scene structure out of object groups.

One last random concept I learned: alembic files only contain mesh data, which can be good when passing files on to lighting artists who want to start from a more blank canvas (and not deal with the temp lights of previous artists).

Neat stuff!

Doodle Animation Blog #2 – moving on to the second action

Last week we had an acting meeting and received notes from some of our friends on the opening sequence acting. After making some adjustments and 6 versions of the scene we went from this sequence:

To this one:

I’m pretty happy with the change. We made the head movement at the start more deliberate, and added in a pause / turn of the head when Doodle considers what the sign is saying and makes the connection to Cass (which was a very good note, thank you Aiden!). I also updated the sign animation, though I know it is still a bit janky at this stage. For now I’ve moved on from this shot. In the final version of this action I’d like to tweak the eyeline of Doodle reading the sign, as well as make some minor adjustments to the timing of the body movements.

For the past few days I’ve been working on a sequence of action much later in the film, in which Doodle runs away from being put in the recycling, hides around a corner on the desk, sees the airplane information across the room from him, and starts folding himself into an airplane. Right now I have the first two parts of this action very roughly laid out:

I’m happy with the skidding to a halt, however I think the pushing up could be a bit quicker, and the breathing / head turn needs to be slowed down. That shall be an issue for future discussions though.

Python Day 66 – starting to analyze my own data

Today I imported a .csv file with some data on fulfillment that I’ve been tracking for myself for a couple months. I then tried to do some analysis on it…

I learned:

To use the .iloc[row, column] property in Pandas to select certain rows of data. While it works, it returns the value on its own rather than inside a Pandas dataframe (?):

I am still confused on:

How to return the values in an actual dataframe once using .iloc. Also, when I tried to select all values greater than 0 with this code:


I got a key error for ’18’. I’m not sure what’s up with that yet.

Looking forward:

I want to learn how to select the values of a certain row only if they have an inputted value (not the blank NaNs). This will eventually lead me to graphing the values of fulfillment and comparing them with the component values of personal time, work time, creative time, etc…
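To collect what I think is going on so far, here’s a minimal Pandas sketch (the column names are made up, not my actual CSV): scalar `.iloc` indexing vs. keeping a dataframe, boolean masks via `.loc` (indexing with a bare number is treated as a column label, which is possibly where that ’18’ key error came from), and dropping the blank NaNs:

```python
import pandas as pd

# Stand-in for the fulfillment CSV (column names invented for the example).
df = pd.DataFrame({
    "fulfillment": [3.0, None, 5.0],
    "work_time": [2.0, 4.0, 1.0],
})

# Two scalar positions return a bare value, not a dataframe...
scalar = df.iloc[0, 0]        # 3.0, a plain number

# ...but passing lists (or slices) of positions keeps the dataframe shape:
small_df = df.iloc[[0], [0]]  # still a DataFrame: 1 row x 1 column

# Boolean selection goes through a mask; df[18] would look for a
# column literally named 18 and raise a KeyError.
busy = df.loc[df["work_time"] > 1]

# Keep only rows where fulfillment was actually entered (drop blank NaNs):
filled = df.dropna(subset=["fulfillment"])
```

This is just my current mental model of the API, but it matches what I was seeing.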

Doodle Animation Blog #1 – the opening action

Today I added the sign, completed the triangles of the legs, and added a bit of animation at the end of this to have the sign disappear.

I’m still not sure about some of the acting moments. Specifically the way he looks up and around to the sign at the start, and the initial pause before he looks. It feels like it lasts too long, maybe? I guess I just don’t know what Doodle would be thinking upon first coming alive. I want him to almost take in his surroundings, but that will be hard to convey, as when we film this in real life we want to be pretty close to him.

Tomorrow I’m going to work on him seeing Cass and then jumping to get his attention.

Doodle Short Film – a project overview

For the past 6 months, my friend Kheyal and I (and a handful of other students) have been working on a short animated / live action film called Doodle. The film is a romance about Alex, who has a crush on another guy, Cass. Uncertain how to say it, he doodles a love note on a notecard with a hand-drawn character presenting the message. He slips it into the guy’s apartment, and the film follows this doodled character as he comes to life and runs around the apartment trying to deliver the love letter before being thrown away in the recycling.

We have worked a lot on this project (100+ hours); however, today is the first day that we are beginning the hand drawn animation process. I’d like to document the process, as I think it’d be helpful for processing thoughts!

What have we done so far?

In short, we’ve completed the script, filmed the first half of the pure live action sequence, done countless animation tests, and created an animatic. A large portion of this project has been both of us teaching ourselves how to do hand drawn animation, as we’re both completely new at this.

Style tests:

Here is the very first concept test done in July of 2021 to see whether the combination of live action / stop motion / 2D animation could even work. Each frame was traced onto notecards from a preset 3D animation found online and animated through stop motion. (This was initially done with the conception that the animation would be done with 3D models and drawn in perspective so it looks like the character is standing up in 3D space. We’ve since moved away from that concept but I’d like to explore it some more in the future).

Just a few weeks ago we completed another test but with our actual 2D character, which is the closest concept we have to what the final film will look like:

Live action filming:

In November of 2021 we filmed a majority of the live action setup for our film. This involved a couple months of casting, rehearsals, location scouting, and all the normal short film production stuffs. We had an amazing crew of Emory students come together to make this happen which was so much fun.

I won’t lie though, there was a period right before filming where we almost cancelled the shoot day. The script was still being tweaked and we wanted more time to finalize it. In the end we decided to push through, as we had already done so much pre-production, and with finals for school coming up we didn’t think there would be another time to film. It was a big lesson for us in the importance of putting the script through as many notes as possible before pre-production even begins. Given that we’re all full-time students, though, I’m not too upset at us for working on it until the last minute. We still did an incredible thing by pulling it together for the day (and staying on schedule!) to film anything. I’m very happy we did so.

Learning animation:

Now this has been the biggest challenge. I don’t know how to fully sum up this process. It’s been a lot of watching YouTube videos, filming reference, testing out different methods of animation, testing out different software, and learning an entirely new skillset of drawing emotive motion.

For reference, in September of 2021 we completed our first animation test (from the reference footage of ourselves acting out the motion):

And just today in January of 2022, we finished a rough for the same motion… nearly 4 months later.

In between these two videos has been nearly 60 hours of work practicing and figuring out the process of animation…

(2021 Dec) a layer breakdown of the classic pillow test animation
(2021 Dec) a layer breakdown of my first walk cycle
pushing from the co-director/animator Kheyal

We’ve settled on doing the animation in Blender’s Greasepencil system. We initially filmed me acting out all the actions, imagining we’d practically rotoscope the movement onto Doodle, but we’ve since switched to drawing it frame by frame (with the occasional keyframe) ourselves.

The process of animating:

I’ve found once we get the rough keyframes of the motion, I’ll create a new layer for each body part and rough out the motion frame by frame in between the keyframes. Once we have a rough of the timing, we’ll tweak it and then do a final pass.

We then take this digital animation, and trace over it frame by frame onto notecards. Then by swapping notecards in between photos, we can film and move the character around in the real world!

Mixing together live action / stop motion / and digital hand drawn animation is really the driving force behind this film if I’m being honest. I’m happy that we’ve created a story that can only be told in this format, however that wasn’t the main focus from the start.

A few weeks ago we completed a super rough animatic for the animated portion of the film:

This will be our bible while animating for the next few months.

Speaking of which…


Here is the first pass on Sequence 1 of animation, which I’m doing (we’ve split the animatic up into core sequences of 1-6 actions). This is rough and without the sign that Doodle is carrying in his right hand. I’ll be tweaking this for the next couple days while Kheyal works on Sequence 4:

For these rough motions we’re not worrying as much about character consistency, as we can redraw that on top later.

Looking forward:

I plan on mainly blogging throughout the steps of animation. This will likely consist mostly of my parts of the animation, as Kheyal and I will be at different stages at different points, but please note that I’m not the only one doing this. We’ve split up the sequences so I’m doing 1, 3, and 5. We created a whole document outlining what that means, but at the moment I’m too lazy to put that in. Suffice it to say this will be a personal project blog, so I don’t know why I’m spending so much time making this all make sense.

Anywho, with that.

The End 🙂

Not really. This is just the start…

Maya Day 10 – first animation!

Today I redid the shading for Linky, and did this very simple animation! Maya only crashed twice and froze once… I’m a little perplexed how this is the industry standard at this crash rate hah.

I learned:

While this was mostly review of previous knowledge, it helped me conceptualize how hypershade works. I got experience adjusting shaders, using the workspace, and understanding how the output and naming system works with material groups. The material groups were already set from when I imported the model; however, I assume the concept would be the same if I created them in Maya myself.

I learned how to export an animation sequence. I can’t find the button directly in Maya (it appears they removed the ‘render’ button from the top bar less than a year ago), but I Control+F searched for it and used it that way. For sequences it’s important to change the render settings -> frame/animation -> set to ‘name.#.ext’, and then one can change the frame range.

Oh also!! I realized why some objects kept moving around two days ago. It wasn’t a render layers thing but a keyframe thing! I also learned it’s useful to change the pivot (press D then translate) of the camera to allow for easy animatable rotation around an object (this is actually super neat. I didn’t have to parent it to a null object like in Blender!), and to use the middle mouse button to move keyframes in the graph editor.

I am still confused on:

Where is the actual ‘render’ button? Also Maya kept crashing when I tried to add a mesh light to an object. What’s up with that?