October 21, 2016

After taking a recent look at the source code and live demo, I've decided to revisit this project.  I believe that with 2+ years of professional experience under my belt, I can make this project run more efficiently, with cleaner and more readable code.  I'm also hoping to make some of the algorithms implemented in this project more functional and less buggy.  New in-progress demo to come soon.

April 27th, 2014

This week I worked on updating the UI and implementing Inverse Kinematics.  I simplified the navbar to contain only two buttons, one to show/hide the help text and the other to show/hide the information and buttons relating to the mannequin.  I think this helps simplify the application to make it look more streamlined and easy to use.  Additionally, I continued working on implementing Inverse Kinematics.  I restructured some previous code relating to the joint limitations so I could reuse it in the cyclic coordinate descent function.
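For context, cyclic coordinate descent sweeps the chain from the effector back toward the root, rotating each joint so the effector swings toward the target.  A minimal 2-D sketch of the idea, with hypothetical forward and ccdStep helpers rather than my actual three.js code:

```javascript
// Chain: array of segment lengths; angles: per-joint rotation (radians).
// forward() accumulates joint positions from the root at the origin.
function forward(lengths, angles) {
  const pts = [{ x: 0, y: 0 }];
  let a = 0, x = 0, y = 0;
  for (let i = 0; i < lengths.length; i++) {
    a += angles[i];
    x += lengths[i] * Math.cos(a);
    y += lengths[i] * Math.sin(a);
    pts.push({ x, y });
  }
  return pts;
}

// One CCD pass: visit joints from the one nearest the effector back to the root.
function ccdStep(lengths, angles, target) {
  for (let i = lengths.length - 1; i >= 0; i--) {
    const pts = forward(lengths, angles);
    const joint = pts[i];
    const eff = pts[pts.length - 1];
    // Rotate this joint by the angle between (joint -> effector) and (joint -> target).
    const toEff = Math.atan2(eff.y - joint.y, eff.x - joint.x);
    const toTgt = Math.atan2(target.y - joint.y, target.x - joint.x);
    angles[i] += toTgt - toEff;
    // Joint limits would be clamped here before moving on.
  }
  return angles;
}
```

Repeating ccdStep until the effector is within a tolerance of the target is the whole algorithm; the joint-limit clamp is where the restructured limitation code would slot in.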

I ran into some trouble trying to get the <x, y, z> coordinate of the effector point based on where the user's mouse is located.  The point tends to rapidly disappear into space when the camera is rotated.  Additionally, I need to restructure the camera controls, as they cause the camera to move when the user is click/dragging the effector point.  I may write my own camera-handling code that only allows motion while a key is held down at the same time.
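One way to keep the effector from flying off into space is to intersect the pick ray with a fixed plane instead of relying on the raw mouse depth.  The math reduces to the sketch below (toNDC and rayPlane are hypothetical helpers; in three.js the equivalent would be Raycaster.setFromCamera plus a ray/plane intersection):

```javascript
// Map a mouse event to normalized device coordinates in [-1, 1].
function toNDC(clientX, clientY, rect) {
  return {
    x: ((clientX - rect.left) / rect.width) * 2 - 1,
    y: -((clientY - rect.top) / rect.height) * 2 + 1,
  };
}

// Ray/plane intersection: ray origin o, direction d, plane n . p = k.
function rayPlane(o, d, n, k) {
  const denom = n.x * d.x + n.y * d.y + n.z * d.z;
  if (Math.abs(denom) < 1e-8) return null; // ray parallel to the plane
  const t = (k - (n.x * o.x + n.y * o.y + n.z * o.z)) / denom;
  if (t < 0) return null; // plane is behind the ray origin
  return { x: o.x + t * d.x, y: o.y + t * d.y, z: o.z + t * d.z };
}
```

Constraining the drag to a plane facing the camera means the effector always lands at a well-defined depth, no matter how the camera is rotated.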

I plan to finish implementing Inverse Kinematics, as well as continue to clean up the UI this week so that everything will be ready by the demo on Thursday.

April 20th, 2014

This week I worked on setting up some of the initial pieces to implement inverse kinematics.  First I loaded an effector point and set its color to blue (to differentiate it from the other joints, and to make it easier to spot).  Then I set up a couple of radio buttons that allow the user to toggle between fk and ik.  This is set up such that the effector point is only shown when the inverse kinematics button is selected.  Last, I set up a way to track the location of the effector when it is moved.  This is set up on a click-and-drag event and currently just prints the new x and y location of the point.  In the next couple of days, this will be hooked up to the camera to calculate the relative <x, y, z> coordinates in the viewport.

Next week I plan to implement the CCD algorithm for inverse kinematics.  Additionally, I will need to figure out a way to handle the click and drag of the effector point without having the camera rotating as well.  If possible, I want a way for the user to be able to manipulate the camera and the effector without one affecting or altering the use of the other.  I also hope to clean up and separate out the UI a little more to keep the navbar from being so cluttered and busy.  I am considering adding buttons that will allow the user to toggle which information they want to see.

April 13th, 2014

This week, I've been swamped with my fine arts thesis project. As such, I haven't been able to spend as much time as I was hoping working on this project.  Mostly, I just reviewed the concepts and the algorithms for implementing inverse kinematics, and started thinking about how to best incorporate it into my code.  I've also been considering the problems ik may cause with my current UI, because allowing the user to click and drag the effector point would interfere with the camera controls and may cause the two to interact in odd and unexpected ways.  However, I don't want the effector to be controlled with the keyboard because that would limit the ease of use, and make ik seem less effective or useful in the app.

Next week, I'm planning on implementing inverse kinematics, and trying out a couple different ideas on how to allow the user to effectively control the effector point.

April 6th, 2014

This week I worked on cleaning up my project for the Beta Review.  I finally got around to implementing some of the minor functionality that I had pushed to the backburner these past couple of weeks.  This included updating the UI, toggling between two models, and starting to clean up and better organize my code.

In terms of the UI, I created input fields that allow the user to set a value for the rotation of the selected joint, with the view updating on blur. Limitations have been set up on all joints so that the user cannot set the rotations to be out of range.  On selection, the upper navbar displays the rotation angles, as well as the name of the selected joint. I also created a button that toggles between selecting the root of the character and the previously selected joint.
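The clamp-on-blur behavior boils down to something like this (clampRotation is a hypothetical helper name, and the commented-out listener assumes three.js's THREE.Math.degToRad for the degree conversion):

```javascript
// Clamp a typed rotation value to the joint's allowed range.
function clampRotation(value, min, max) {
  const v = parseFloat(value);
  if (Number.isNaN(v)) return min; // fall back for non-numeric input
  return Math.min(max, Math.max(min, v));
}

// input.addEventListener('blur', () => {
//   joint.rotation.x = THREE.Math.degToRad(clampRotation(input.value, -45, 150));
// });
```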

I also added functionality to let a user toggle between two different models.  It stores the positioning of the hidden model so that the pose doesn't get lost when toggling.  Eventually, this will be set up so that one model has male proportions and one has female proportions.  Currently, having the two models load with the page slows down the initialization of the app.  I'm going to look into the possibility of loading the second model with ajax or some other method to increase load time.
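The pose bookkeeping for the model toggle can be sketched like this (savePose and applyPose are hypothetical names; joints are assumed to expose a three.js-style rotation with x, y, z fields):

```javascript
// Snapshot each joint's rotation before hiding a model...
function savePose(joints) {
  const pose = {};
  for (const name in joints) {
    const r = joints[name].rotation;
    pose[name] = { x: r.x, y: r.y, z: r.z };
  }
  return pose;
}

// ...and reapply the snapshot when that model is shown again.
function applyPose(joints, pose) {
  for (const name in pose) {
    if (!joints[name]) continue;
    Object.assign(joints[name].rotation, pose[name]);
  }
}
```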

Additionally, I began sorting out my code into smaller js files to increase readability and make my life a bit easier.  I plan to continue to do this, as well as start to refactor and comment my code over the next couple of weeks.

Through the next couple of weeks, I'm going to be focusing on implementing ik and cleaning up my app.

March 30th, 2014

This week, I added joint limitations to the shoulders, hips, neck, and lower back.  I'm still struggling to get the full range of motion because in some cases, calculating the rotation in radians causes -45 and -135 degrees to return the same value, leaving me unable to differentiate between the two in a conditional statement.  I plan to troubleshoot this problem over the course of the next week (before the beta review) to hopefully solve the issue.
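The collision is consistent with the symmetry of sine: sin(-45°) equals sin(-135°), so a conditional built on the sine alone can't tell the two apart.  A quick illustration (deg is a hypothetical helper), with atan2 over both sine and cosine as one way to recover the full angle:

```javascript
const deg = d => d * Math.PI / 180; // degrees -> radians

// Sine alone is ambiguous across the mirror line:
const a = Math.sin(deg(-45));  // ~ -0.7071
const b = Math.sin(deg(-135)); // ~ -0.7071 -- identical value

// Using both sine and cosine disambiguates the quadrant:
const angleA = Math.atan2(Math.sin(deg(-45)), Math.cos(deg(-45)));   // -45 deg in radians
const angleB = Math.atan2(Math.sin(deg(-135)), Math.cos(deg(-135))); // -135 deg in radians
```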

Additionally, I began cleaning up my code this week.  As the code base has grown, it's gotten messy and hard to read.  I decided to make my life a little bit easier by fixing all of the tabbing, and over the course of the next couple of weeks I plan to separate out the code into different js files based on their function.  This should cut down on the number of lines of code in each file and will give me a chance to try to consolidate any repeated/duplicate code and to add comments to increase readability.

Over the course of the next week, I plan on adding some additional UI elements that will allow the user to select and rotate joints based on scrollers and drop-down menus.  I think I'll probably run into an issue setting the rotation of the joints to a specific value instead of just adding to the current rotation.  I will also run into the problem of limiting user inputs to be within the ranges enforced by the current limitations.  If I can make use of sliders instead of typed numerical input, I think it will help with this problem.

March 23rd, 2014

This week I worked on adding joint limitations to the elbows and knees.  The knees can now only bend 5 degrees forward and 150 degrees backward (along the x axis) and cannot be rotated along any other axes.  The elbows are not fully working yet as they can only rotate 90 degrees forward along the y axis.  I ran into some difficulties getting the elbows to be able to rotate 150 degrees forward as the conversion from degrees to radians is making the math weird.  I plan to figure out the reason why and fix it by next week.

I also edited the joint structure so that the pelvis acts as the root for the mannequin allowing the lower back joint to control the bending and motion of the upper body.  I need to adjust the code to prevent the lower back joint from being able to translate, and instead set that to work on the pelvis.  I also want to make a button that will select the entire mannequin instead of having click selection for the non-joint pelvis object.

Next week, I plan on implementing the ball and socket joints in the shoulders, hips, and neck.  I also plan on reviewing the inverse kinematic algorithms to see which would work best in my app and how I can make it work properly with three.js.

March 9th, 2014

This week I read "Fast and Easy Reach-Cone Joint Limits" and "Efficient Spherical Joint Limits with Reach Cones" to gain some background on how joint limitations work and how they can be applied to a project such as mine.  After reading the articles and searching for a couple more, I began to think about how to apply the limitations to the system I have in place for controlling the mannequin.

In addition, I also added in the rest of the modeled pieces of the mannequin this week.  He now has both of his legs (with hip and knee joints).  After loading the model in, I realized that I may need to add a couple of joints around the root joint to allow for the torso and the pelvis to be rotated separately and in different directions.  The way the joint system is set up currently does not allow them to do so, and as such results in a very stiff and awkward looking character.

Next week, I plan on 1) implementing joint limitations, 2) adding a second model (what will eventually become the female character) to the scene, 3) including a way to toggle between the two, and 4) setting up a more explicit UI to select and rotate joints (in addition to the mouse and key controls being used currently).

Happy Spring Break!

March 2nd, 2014

This week I didn't make as much progress as I had hoped, and worked predominantly on bug fixes and cleaning up my app for the alpha review.  The main things I worked on were 1) styling, 2) bug fixing selection, and 3) giving the figure his second arm.

I added styling to my app in the form of adding a top-bar with a dropdown containing the 'help'/'how to use' text.  By moving the help text into a dropdown, I was able to make the canvas take up a larger portion of the screen.  This lets the model load at a larger size and makes it easier to manipulate.  In creating the top-bar, I had to move the canvas down about 40px.  When I did this, a previously undiscovered bug became obvious.  My selection was not accounting for the position of the canvas on the page, and therefore, when I moved it down 40px it was reading a mouse click as clicking 40px higher than it actually was.  This meant that joint selection stopped working because it was looking for objects above the ones I thought I was clicking.  To solve the problem, I just subtracted the top distance from the event click when calculating the mouse position.  Now selection is much more accurate and the mouse click actually selects what it's supposed to.
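The offset fix amounts to measuring the mouse relative to the canvas rather than the page.  A sketch of the idea, assuming a getBoundingClientRect-style query on the canvas instead of a hard-coded 40px (canvasMouse is a hypothetical name):

```javascript
// Translate a page-space click into canvas-space coordinates,
// so moving the canvas around the page can't break picking again.
function canvasMouse(event, canvas) {
  const rect = canvas.getBoundingClientRect();
  return {
    x: event.clientX - rect.left,
    y: event.clientY - rect.top,
  };
}
```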

Additionally, I gave my model his second arm.  I ran into some problems earlier in the week getting the second arm placed properly, to the point where his elbow kept loading in the middle of his upper arm.  I eventually was able to solve the problem by scaling the objects by -1 in the x and y directions and tweaking the translation values.

Next week I plan to read up on and begin to implement joint limitations. Additionally, I plan to model and add in the rest of the body, including the pelvis, legs, hands, and feet.  If I have time, I may also make a quick "female" model so the user can toggle between the two genders.  At this point in time, the models may not be proportionally accurate, but being able to import and toggle between two models will set up the framework that will allow me to easily replace the rough models with the more accurate ones.  

February 23rd, 2014

This week I worked on (1) joint selection, (2) moving/rotating objects and their children, (3) trackpad controls, and (4) importing a smoother version of my model.

I set up joint selection so that when the mouse is clicked, a ray is cast through the scene.  When the ray intersects an object, it adds it to an array containing all of the intersected objects.  Then, it checks if any of the intersected objects are joints, and if so, returns the closest one.  If a joint was intersected, and if it is different than the previously selected joint, the material of the old joint gets set back to the default and the newly selected joint gets a color applied to it.  Once a joint has been selected, the user can rotate the object chain starting at the selected joint.  This means that when the shoulder joint is rotated, the upper arm, elbow, and lower arm also rotate. If the root is selected, the entire figure can be rotated or translated.
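The selection logic can be sketched roughly as follows (pickJoint, selectJoint, and the isJoint flag are hypothetical names; three.js's intersectObjects does return its hits sorted by distance, so the first joint hit is the closest one):

```javascript
// Walk the distance-sorted hit list and return the nearest joint, if any.
function pickJoint(hits) {
  for (const hit of hits) {
    if (hit.object.isJoint) return hit.object;
  }
  return null;
}

// Swap materials: restore the old selection, highlight the new one.
function selectJoint(state, joint, defaultMat, highlightMat) {
  if (state.selected === joint) return state; // clicking the same joint is a no-op
  if (state.selected) state.selected.material = defaultMat;
  if (joint) joint.material = highlightMat;
  state.selected = joint;
  return state;
}
```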

In addition to selection, I also set up trackpad controls to manipulate the camera.  To do this, I used the three.js plugin that handles trackpad controls.  This plugin zooms in with trackpad scrolling, and rotates on the click & drag.  I switched the camera controls from the keyboard to the trackpad to free up keys to use to manipulate the mannequin's joints.  I may wind up adjusting, or writing my own trackpad functionality in the next couple of weeks, to ensure the control works how I want it to, and to prevent a mouse click being read as a camera rotation instead of a joint selection.  

Finally, this week I imported a smoother version of my model. The smoothing was just done in Maya, and then saved to an OBJ file.  While working on joint selection, I realized that all of the objects I wanted to import needed to be saved at the origin of the Maya scene, and then once imported have their position set relative to their parent object.  Because I had to go through and re-save all of the Maya objects, I decided to smooth each mesh to make it look more visually appealing in my app.

Next week I plan on working on a simple, relatively bare bone UI to handle joint selection and manipulation in a visual way (in addition to the current trackpad/keyboard set up).  I will also begin reading up on joint limitation algorithms, and starting to look for pose data I can later import and apply to my model.  

February 16th, 2014

This week I met up with Norm to discuss my project.  I originally just wanted to check in with him to see if my project was reasonable and my timeline for the project made sense.  However, during our meeting, we wound up narrowing the scope of the project and changing around my schedule.  Now I won't be skinning the mannequin to show muscle definition; instead I'm going to work on fk and ik with both spherical and polygonal joint limits, and on a correction/recovery algorithm that will move any joints back within their limit if they go out of range.  The hope is that by focusing on these pieces I'll have something demo-able for each review and a clean, finished product by the final review.

Additionally, I worked on getting a partial woody character loaded from a Maya OBJ file.  The decision to focus on this first came from the idea that having a woody character loaded into the scene would give me a multiple-object character to test selection and fk on.  Loading a file took longer than expected, however, because in order to test or run my loader code, I needed to set up a server.  I chose to use Node.js and Connect (a middleware framework for Node) because I just needed to be able to load my static index page.  Because creating a server meant I could no longer host my page off of A Small Orange (where I host my personal website), I decided to push my code up to Heroku.  This is where I struggled for a while.  Once I was able to get Heroku to recognize that I was pushing a Node app -- I forgot that I needed a package.json file describing which versions of Node and Connect were being used -- I couldn't seem to get the Heroku page to load and show anything.  After a lot of looking in the wrong direction and thinking the problem was coming from sending the HTML file, I realized that I had forgotten to set my port to be 8080 locally or the PORT env variable defined by Heroku when I pushed it to production (face palm).

Once my server was set up and functional, I was able to go back to my OBJ loader.  I initially created an OBJ file for a multiple-object character from Maya and was able to get it to load into my scene successfully.  However, it appeared to group as a single mesh, instead of being multiple objects.  So I exported each object from Maya as separate OBJ files and loaded them in individually.  This seemed to work successfully.

I also started working on getting object selection working properly.  I used this demo as reference, and plan on breaking down the code more fully to better understand what it's doing and why.  Object selection seemed to work somewhat when I first started working with the OBJ loader.  When I loaded in the OBJ file containing multiple objects, I was able to click on a part of the mesh and drag it around the screen.  However, it would move all of the objects together as one, instead of as separate pieces.  I plan on troubleshooting this more next week, and am going to look into Three.js's built-in raycaster more so I can get a better understanding of how it works.

For next week, I plan on: (1) getting selection working properly, (2) starting a rough version of fk for the character, (3) setting up a really rough UI to aid in showing off my project in the alpha review, and (4) cleaning up my code, breaking it into different js files and separating out concerns to make it more readable (as it is a bit of a mess right now).

February 9th, 2014

This week I worked on (1) incorporating Three.js into my project, (2) implementing a perspective camera that can be rotated and translated along the x, y, and z axes, (3) setting up simple keypress controls to control the camera, and (4) brainstorming a way to incorporate the camera controls into the UI.

Three.js is an open source toolkit that abstracts out some of the details of the WebGL API by breaking the scene into broader objects such as meshes, materials, cameras, and lights. It is fast, supports interaction, and has built-in libraries to handle a lot of the matrix and vector math. I believe that using this toolkit will allow me to get my project up and running faster as I won't be as bogged down trying to learn all of the nitty gritty details behind WebGL.

This week, with the help of WebGL: Up and Running by Tony Parisi and the documentation at threejs.org, I created a scene containing a grey cube, a perspective camera, and 2 directional lights (one red and one white).  I then worked on creating a system to allow the user to control the camera's orientation and position.  Currently, the user can control the position as well as the yaw, pitch and roll of the camera using keys on their keyboard.  For instance, "a" will move the cube towards the left of the screen, "d" to the right, "w" towards the top, and "s" towards the bottom.  "e" and "q" will zoom in and out (respectively).  The camera rotation is controlled in a similar manner using the "j", "k", "l", "i", "u", and "o" keys.  I began brainstorming and sketching some possibilities to improve this method of user interaction to be less keyboard based and more GUI based.  **Sketches will be added to this blog post when I get to a scanner and can post a digital version**
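The keyboard scheme can be sketched as a simple key-to-direction table (MOVE and moveDelta are hypothetical names, and the listener wiring in the comment is illustrative):

```javascript
// Translation keys mapped to unit directions: [x, y, z].
const MOVE = {
  a: [-1, 0, 0], d: [1, 0, 0],  // left / right
  w: [0, 1, 0],  s: [0, -1, 0], // up / down
  e: [0, 0, -1], q: [0, 0, 1],  // zoom in / out
};

// Scale the direction for the pressed key by the step size.
function moveDelta(key, step) {
  const dir = MOVE[key];
  return dir ? dir.map(c => c * step) : [0, 0, 0];
}

// document.addEventListener('keydown', e => {
//   const [dx, dy, dz] = moveDelta(e.key, 0.1);
//   camera.position.x += dx; camera.position.y += dy; camera.position.z += dz;
// });
```

A second table of the same shape would cover the "j"/"k"/"l"/"i"/"u"/"o" rotation keys.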

Next week, I plan on building a Woody character and adding him to the WebGL scene.  I'm also planning on solidifying my UI plans and implementing a user GUI to go along with the keyboard commands, as well as getting the orthographic views to work properly (currently when the user selects orthographic view, the cube disappears from the scene).  In addition, I will continue to read up on WebGL and Three.js in WebGL: Up and Running, Learning Three.js, and Professional WebGL Programming in the hopes that I will have a firm understanding of the technologies by the time I need to start implementing real algorithms.

As usual, the current state of my project can be viewed by clicking on the Senior Design Demo link in the sidebar to the right.

February 1st, 2014

For this senior design project, I am making an online version of the posable wooden mannequins used by artists. As an artist myself, I have found myself in situations where I have wanted and needed a model and haven't had one on hand. Searching online yielded no flexible posing apps that gave the user the ability to fine tune the position of a model. Many mannequin web apps only let the user choose preset poses, or move joints to 2 or 3 predetermined positions. Additionally, the posing apps typically used Woody style mannequins, and as such did not have any muscles to show definition or distortion as the character moved. Any posing systems with more features and flexibility required a download and typically were accompanied by a large price tag. I intend to create a web app that will give users the ability to (1) select from a range of predetermined poses, and (2) move the joints of the mannequin to fine tune the pose. In this app, the mannequin will (1) have a wide range of motion, while (2) constraining each joint's movement to match that of a real human, and (3) show realistic looking muscle definition as the mannequin is moved.

This week I worked on focusing my initial idea down into a more feasible and concrete project for this semester.  Instead of creating a muscle system to limit joint motion and show muscle definition, as I had initially planned, I decided to use spherical joint limits to limit motion and skinning to add the muscle detailing.  This decision was made after learning that using a muscle system would likely take me most of a semester to implement, and would prevent me from creating a full body posable mannequin in my app.  Because I believe having a fully posable mannequin in a full-featured app is more important than being anatomically correct in the building of the mannequin, I decided to find different methods to achieve the same desired results of the muscle system.

Additionally, I spent most of my time researching and experimenting with WebGL this week.  As I have never used WebGL before, I looked up a couple of different tutorials online and found a few highly rated books to use as reference.  The main tutorial I began working through was "Getting Started with WebGL" on the Mozilla Developer Network.  I expected to get past rendering simple 2D graphics this week, but ran into silly errors caused by my shaders and matrices not being read properly by my JavaScript script.  Eventually, I was able to get a white 2D square to appear on a black background (viewable under the Senior Design Demo tab).

In addition to this tutorial, I picked up WebGL Up and Running by Tony Parisi from the bookstore.  In skimming through the content, I realized that using a JavaScript toolkit like Three.js will allow me to more easily set up my camera and user interactions so that I can spend less time implementing these more standard aspects of my project, and more time implementing the more complex kinematics, joint limitations, and skinning.  As such, I'm hoping to learn about 3D objects, and get user interaction and the camera set up by the end of next week.

I've also ordered Professional WebGL Programming: Developing 3D Graphics by Andres Anyuru from Amazon, and am expecting it to arrive within the next couple of days.  As WebGL Up and Running focuses more on getting readers up and running quickly on WebGL through the use of Three.js than on the explicit details of 3D graphics programming, I decided Professional WebGL Programming would fill that gap.