Monday, June 30, 2008
In the book The Story of Mathematics, I learned a bit about the world's first computer program. Though the Analytical Engine of Charles Babbage was never built, there was a person who worked closely with Babbage, and who was able to design instructions for calculating Bernoulli numbers using the theoretical machine. This is considered to be the world's first computer program, even though it was never implemented. That person's name was Augusta Ada King, also known as Ada Lovelace.
Yup, that's right. The world's first programmer was none other than... a woman.
Friday, June 27, 2008
I just received my brand new Nintendo DS - for free! I used the points accumulated on my credit card to get it. It came with a bunch of nice accessories including a hard and soft case, and two games. I'm not too excited about Madden NFL 06, but Ninja Gaiden: Dragon Sword looks intriguing!
So why am I writing about video games on a blog about computer science? I've definitely talked about games before in posts like this one, this, or even this. I just know that I won't be able to play my new DS without thinking about the theory behind the games! Besides, it looks like I should be able to TA a first year game design class this fall, so I suppose I should be more in the know.
I expect I will report back (eventually) about how I like the games on the DS.
Wednesday, June 25, 2008
Another by-product of reading about Randy Pausch and watching The Last Lecture was the discovery of the most ultra-cool university course ever in existence. You know, the one you wish you could have taken. Building Virtual Worlds throws artists and computer geeks together in teams of four or five to work for a few weeks building a single virtual world using cool technology like head mounted displays and trackers. After they're done, new teams are created and they start again. The results are spectacular.
In slightly more detail, the site describes the course as follows:
Building Virtual Worlds' goal is to take students with varying talents, backgrounds, and perspectives and put them together to do what they couldn't do alone. The key thing is that there are no "idea people" in the course; everyone must share in the mechanical creation of the worlds. Students use 3D modeling software (Maya), painting software (Photoshop), sound editing software (Adobe Audition & Pro Tools), and Panda3D, a programming library originally developed by Walt Disney Imagineering's Virtual Reality studio, to display our virtual reality worlds. The course uses unique platforms such as the Head-Mounted Display and Trackers, the Jam-O-Drum, the TrackBox, the Playmotion, camera-based audience interaction techniques, Quasi the robot, and others.
Looking back to my undergrad years, I know I would have loved this class, but at the same time, I know I would have signed up with trepidation. The thought of working with new groups every few weeks would have been worrisome. Who knows when you'd get stuck with people who just didn't do their share? There's no way to complete a good project in such a short time unless everyone contributes. In the end, though, I think that this fear would have been for naught. I don't think you'd take this class "just for the credit." You wouldn't sign up without knowing about and being ok with the workload. Would it work at all schools? Not likely. But there seems to be something special about Carnegie Mellon and their marriage of art and technology (just look at their Entertainment Technology Center). I think it's ok to be envious!
Enough talk, let's see some of the projects!
Hello World was featured in The Last Lecture and is still one of my favorites. I love how the cute little bunny character and the world created for it suddenly go dark when it's time to call it quits.
Virtual Reality: Wave of the Future is equally funny. It plays like an educational film from the '50s, (sort of) explaining why virtual reality is going to be important and popular in the future.
Finally, River Rafting shows how you can make a game instead of an interactive video.
There are many more videos of the worlds created on the course's website. They are all worth checking out! While you watch, you may as well ask yourself... could I do this? What world would I make? I don't quite have an answer for myself, except for some beginnings of ideas...
Monday, June 23, 2008
I recently joined the Systers mailing list and have been fascinated by the conversations taking place there. The variety of backgrounds of all these women in technology is amazing, from undergrad students, to researchers and professors, to women in industry.
Today on the list I learned about the Systers Pass-It-On Grants. Members of the review committee are hoping to get the word out, both to find others willing to support the program, and to let women know that the next round of applications will be accepted soon.
As mentioned on the list:
The program awards small amounts ($500 - $1000 US) to women who need financial assistance for themselves or for group projects that target women in technology. Award winners include women setting up internet access and training for women in Nigeria, recruiting and supporting women in computer science courses, and encouraging girls into computer-related courses. Awards are made in small amounts for women anywhere in the world. For more information, read the Anita Borg Institute's press release about the last round of recipients: http://anitaborg.org/news/archive/on-line-systers-community-names-latest-recipients-of-its-anita-borg-systers%e2%84%a2-pass-it-on-grants/

The point of it is to make small awards to fill the gaps from other funding sources, to award them in a timely manner, and to ask the recipients to pass on the value, monetary or otherwise, to other women in computing, as they can.
Thursday, June 19, 2008
Remember when I wrote about the donor appreciation dinner I went to as a Carleton University student that benefited from donor support? I also mentioned that there was interest in using excerpts from that post with my photo in the final donor report for 2007.
Well, the report is now out! I received my copy as an insert of the Carleton University Magazine. It turns out there is also a version online. Cool! Check it out.
Most of what I've done so far regarding my thesis research has been reading and playing with other people's code. I started off playing with matching code written by Rob Hess of Oregon State University. He seems to have created his own implementation of SIFT (rather than using the inventor's freely available binary file) and used the keypoint descriptors to find matches between two images. You can also have the program try to find an appropriate transformation between the two images using RANSAC. I started looking at this implementation because it should be easy to use as a base for working with SURF descriptors instead, as well as writing several matching strategies to test.
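Hess's implementation is in C, but the core matching idea most SIFT-style matchers use is easy to sketch: for each descriptor in one image, find its nearest and second-nearest neighbours in the other, and accept the match only if the nearest is clearly better (Lowe's ratio test). Here's a minimal, illustrative Python version; the function names and the 0.8 threshold are my own choices, not taken from his code:

```python
import math

def match_descriptors(descs_a, descs_b, ratio=0.8):
    """Match keypoint descriptors with the nearest-neighbour ratio test:
    keep a match only when the closest descriptor in descs_b is clearly
    closer than the second-closest one. Descriptors are tuples of floats."""
    matches = []
    for i, da in enumerate(descs_a):
        # Distances from da to every descriptor in the other image,
        # sorted so dists[0] is the nearest and dists[1] the runner-up.
        dists = sorted((math.dist(da, db), j) for j, db in enumerate(descs_b))
        if len(dists) >= 2 and dists[0][0] < ratio * dists[1][0]:
            matches.append((i, dists[0][1]))  # (index in A, index in B)
    return matches
```

With real SIFT or SURF descriptors (128- or 64-dimensional vectors) the idea is identical; a k-d tree would just replace the brute-force sort for speed. The surviving matches are what a RANSAC step would then use to estimate the transformation between the two images.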
I was rather confused when two images that seemed very similar were not getting the kind of matching I would have expected (and no transformation was found between them). The images taken were of the same building, with a change in viewpoint that isn't particularly significant.
This image was taken with my digital camera a couple of weeks ago:
This is the portion of the panoramic image that contains the same building. I don't know when it was taken. The panoramic images in this case are stored as cubes laid out in the plane. This is one face of that cube.
What do you notice that's different about these images? The illumination isn't quite the same, but that is not a factor when using the more sophisticated keypoint detectors and descriptors. Look closely at the windows of the building. They appear distorted in one image compared to the other.
That's apparently because the focal lengths of my digital camera (6mm) and the camera used to capture the panoramic images (~2mm) are very different, causing a perspective distortion. Or so my profs surmised when we last met. This would mean that one of the images would have to be modified to undo the distortion. It also means that I will be using only images from my digital camera to test the matching code I'm working on (at least for now).
Monday, June 16, 2008
Software usability is pretty important. Where would we be if somebody hadn't invented the iPod? What if your personal computer never evolved past the command line interface? How sad would it be if video game controllers never advanced to become the Wii remote or even have rumble feedback? Life just wouldn't be the same.
I've argued with fellow computer science students about the importance of learning a thing or two about usability many times in the past. I have always felt that the kind of computer science degree we get from schools like Carleton (where I got mine) should include a mandatory course that covers some of the basics, including user and task analysis. My opponents give the argument that a computer science degree should teach computer science -- pure computer science -- and keep all this other stuff optional. I would agree with this if it wasn't for the fact that our degree isn't really pure at all. It includes a healthy mix of practical programming knowledge and software engineering concepts. If we had to devote an entire class to requirements, use cases, and class diagrams, would it not make sense to require students to look at the important stage that comes first? And even if students go on to be researchers or programmers that never touch a line of code in the user interface, would it not still be valuable to know how to think about those who will eventually use their code?
Linux has come a long way since I first really used it at work five years ago. Ubuntu is almost accessible to the masses. Perhaps if a few more of its programmers were forced to take a class on usability, I'd be able to remove the "almost" ;). Seriously, though, I'm impressed with what they've been able to do lately. Since I don't use Linux all that often (gasp!), I appreciate that it's easy to get things done when I get back to it.
In The Great Ubuntu-Girlfriend Experiment, a geek sits his girlfriend in front of a fresh install of Ubuntu to see how easily she can complete some basic tasks. How much she ends up being able to do is testament to the progress of the operating system. Finding the web browser and changing some basic user preference settings prove to be very easy. Creating a drawing and saving it in a few formats takes a little more effort, as does figuring out how to download music via a torrent. Figuring out how to sign into MSN with Pidgin results in success eventually, followed by a few Linux-hating messages to friends.
But there is still more that can be done. Hence that term "almost" accessible used above. The girlfriend had a really hard time installing certain software, including the Flash plug-in for Firefox. I love the package manager for downloading and installing things, but outside of that, I find installing stuff hard, too! It really makes you appreciate the Windows installation wizards that are so standard these days. Handling other devices, including other partitions, has also improved a lot, with some easy-to-use UI shortcuts. Even still, you have to be familiar with some slightly more advanced terminology to know what you need to do, making using even a CD-ROM confusing for the general public at times.
When I visited Google New York in May, several tech talks were put together for us. I was thrilled by the "Life of a Product" panel because half of the people were involved, in one way or another, in matters of usability! Right from the concept, user information is gathered from many sources. You can tell they do the testing properly when you see the one-way mirror between observers and users in the neat little testing room. After a product is launched, Google always makes sure to collect feedback about how people are actually using it, and how they would like to be using it. Several panelists reiterated my sentiments about wanting to see more instruction on this kind of thing in school. Always nice to know that Google thinks your ideas are good ideas, too!
Perhaps if topics of usability are better integrated into CS curriculum in the future, new and exciting paradigms can be invented sooner rather than later. I sure wouldn't mind ditching the mouse if something better came along. If nothing else, we should at least be able to count on continued improvements to the software we know and don't love so much today.
Friday, June 13, 2008
The long anticipated trailer for "Gail's Masters Thesis" has finally arrived. The final version should be released sometime next summer, with many updates on the production's progress before then.
That's right, after many "maybe it will be about..." posts, I finally know exactly what it's going to be!
It's definitely related to my mapping idea, but not in an entirely direct way. Instead of developing a full, working system (for now), I will be researching one particular way to deal with the relative inaccuracy of GPS coordinates available on consumer devices. If a device's location were known exactly, it would be easy to pull up the geospatial data centered around that point, and augment it onto the camera image using projection theory. Alas, even a meter or two off would make for an unfortunate looking augmentation, with roads and tags not lining up the way they should.
My first thought was to actually do some image analysis to fine-tune the GPS coordinates. Say you know the orientation and tilt of a device. You also know that a building or road is supposed to fall on the image at a certain place based on the alleged camera center. Perhaps you can actually find roads and buildings in the image nearby, and use this knowledge to correct the disparity between the GPS and actual location. This seems to have potential, but it might be somewhat difficult to get the real camera center given that there's no way to know at what height the device is being held.
For now, I will be looking into another method. It relies on having 360 degree panoramic images, such as those found in Google's Street View, available for many locations. This may have seemed crazy even just a few years ago, but look how fast Google has managed to capture such data!
The main idea is to first take a general GPS location to narrow down the search space. We would start looking at panorama images in the general vicinity until a match is found between part of the panorama and the image taken from the device's camera. Since the camera position of the panorama is known, a transformation between it and the device's camera can be computed based on the two images. This calculated device camera position can then be used to augment the device camera's actual image.
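As a toy illustration of that first step — using the rough GPS fix to narrow the search down to panoramas captured nearby — here's a sketch in Python. The function names, the 50-metre radius, and the data layout are all assumptions of mine, not part of any real system:

```python
import math

EARTH_RADIUS_M = 6371000.0  # mean Earth radius, metres

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two (lat, lon) points in degrees."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def candidate_panoramas(rough_fix, panoramas, radius_m=50.0):
    """Return ids of panoramas whose known capture positions lie within
    radius_m of the rough GPS fix, sorted nearest-first. Only these
    candidates would need the (expensive) image matching step."""
    lat, lon = rough_fix
    nearby = [(haversine_m(lat, lon, p_lat, p_lon), pano_id)
              for pano_id, (p_lat, p_lon) in panoramas.items()]
    return [pid for d, pid in sorted(nearby) if d <= radius_m]
```

Once the candidate set is small, each panorama can be matched against the device's camera image, and the known panorama position plus the computed transformation gives a corrected estimate of the device's camera position.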
The matching between real images and the panoramas is the main tricky part here. What is the best way to represent the panoramic image data? As a cube, like in the NAVIRE project? Or perhaps a cylinder to avoid the seams between cube faces? And what about the best way to represent feature points? SIFT or SURF?
So many questions.
If you don't have much of a clue what I'm talking about, don't worry! I'm sure it will become more clear as time goes on. Just stay tuned to this blog, and I'll keep you as up to date as I can.
Wednesday, June 11, 2008
The School of Computer Science made a promotional video a while ago, and still has it prominently displayed on its website. I am in the video a few times, talking about why coming to Carleton is a good idea. I never shared it here, and since it's now on YouTube, I have the perfect opportunity to do so. So here it is!
*** Warning: This post is off-topic. This will happen very rarely! ***
I have a wonderful friend who lives on my street. She's one of the most giving people I have ever met. She also happens to be head of the Ottawa branch of the Starlight Starbright Children's Foundation, which helps improve the lives of seriously ill children and their families. Knowing this person so well, I can see how much of herself she puts into her work. She cares deeply about the people she works with, and always has a little something up her sleeve to bring delight to all.
Each August, one of the Foundation's most important fundraising events takes place: it's called the Dreamwalk. I took part last year and am going to do so again this year. I managed to raise several hundred dollars last year, but hope to do even better this time around. My husband and I are even planning to give a large donation ourselves, but are still discussing how big to make it. Suffice it to say that it should be over $100.
If you'd like to help me out in raising money for this event, you can do so securely with a credit card at my personal fundraising page.
Thanks for any help you can give, and wish me luck in raising as much money as I can for Starlight!
--- UPDATE: We donated $300. Woohoo! Hope you can help too!
Monday, June 9, 2008
Last Monday I attended an interesting event for the first time: OS Bootcamp (where the OS in this case is for open source, not operating systems). I had heard of these events before, but never made it out until I had the excuse that the latest edition was going to be all about geospatial software (a topic related to my thesis - more on that in the next few days).
OS Bootcamp started as a free mini-conference to help students gain basic skills generally used in open source software development, from web programming to databases, C++ programming to Eclipse. There were five given before the geospatial day, but the latest was the longest, having talks for an entire day. Free food and drinks are always provided, including sandwiches for lunch in this case.
The first set of talks weren't of huge interest to me in that they focussed on business rather than technology. Andrew Ross, who founded OS Bootcamp, began with his thoughts on the business value of open source, focussing on the cost savings of reusing code that somebody else has designed, developed, and tested. Tony Bailetti of Carleton told us about the Talent First Network and the Technology Innovation Management Masters degree. Emma McGrattan explained how Ingres became the most mature open source database available for enterprise use, and how it integrated new open source geospatial libraries.
Things started to get interesting when Dave McIlagga outlined the significance of open source geospatial software in Ottawa. I had no idea how strong the culture of open source apparently is here. Dave asserted that there is a strong relationship between the community, industry, government, NGO's, and academia. He listed the following trends he thinks will be important, particularly in relation to location. I feel they are definitely relevant to my mapping idea.
- Wireless is (or will be) everywhere. This will allow for a synchronization between reality and information technology.
- Consumers want real time information, like weather, traffic, and so on.
- There is also a desire for "real" reality (think augmented reality!).
- Technologies need to be geospatially aware to be able to make good real time decisions, requiring geospatial software to be well integrated with other technologies.
- There is an increasing importance placed on "where," and this context awareness is again important for effective decision making.
As a bit of a twist on the usual technical talks, Scott Mitchell discussed how open source software could be used in academia from the perspective of a non-technical geographer. A fresh perspective is always eye-opening, and it was motivating to hear about how professors in other departments are making use of the high quality software available free these days. Of equal importance are the disadvantages of OSS in this context. For instance, there is no institutional support for this non-standard software, so a professor that uses it must also administer and fix it himself.
Unfortunately, it was at this point that my attention span could not handle much more. I suppose that's why all the other OS Bootcamps are much shorter, with fewer talks. There were several other talks that I should have been interested in, including one with a great geometry focus, but my mind had wandered, waiting only for the draw for the GPS and PlayStation 3 at the end of the day.
Despite this, I felt the event was really well put together, and am considering attending the next Bootcamp to see if I can't learn a few new tricks in Eclipse. Who knows; I might even be able to convince the husband to tag along.
Thursday, June 5, 2008
I wanted to work on Inkscape this summer, at least a bit here and there. I was trying to help out a Summer of Code student for a while, but my expertise didn't lie as much in his project domain as we were hoping. Other than that, I really haven't had the chance. Or maybe I just haven't forced myself to make the time. Either way, I'm hoping that publicly writing about it might guilt me into ensuring that I fix at least a bug or two before September comes around. Fingers crossed.
"Teach computer science... without a computer!"
CS Unplugged's home page tag line pretty much sums up the New Zealand-based project. I first encountered CS Unplugged on reddit, and got hooked from there. I had wondered if I'd be able to use any of the activities in my mini-course. In the end, I used no less than three of them!
The activities cover a fairly wide range of fundamental computer science concepts. The best part about them, though, is the head fake (if I may borrow Randy Pausch's term). You get kids to play these fun games or lead some zany discussions, so they're pretty much just having fun while learning (or at least being exposed to) some pretty tough concepts. True, some of the more keen students might ask, "So, what does this have to do with computers again?" But all is revealed in the end.
A great example is the activity on Finite State Automata, which I used during the artificial intelligence section of the mini-course. Several kids act as islands, and get printed cards that indicate where a ship may go. There are always two choices: choice A and choice B. The rest of the students start at the first island, and ask to travel along one of these choices. Their goal is to reach treasure island, while writing down the path(s) they used to get there. After everyone has had the opportunity to try it, the class can compare their maps and discuss what patterns of A's and B's can be used to get from start to finish. So while they think they're just going on a treasure hunt, they really just learned how to find examples of strings accepted by a finite state machine. Pretty cool, eh?
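For the curious, the treasure hunt maps directly onto a deterministic finite automaton: islands are states, the A/B cards are the transition function, and a path that ends on treasure island is an accepted string. Here's a tiny Python sketch; the island names and transitions below are made up for illustration and aren't the map from the actual activity:

```python
def run_ship(transitions, start, treasure, choices):
    """Follow a string of 'A'/'B' choices through the island map (a
    deterministic finite automaton) and report whether the ship ends
    up at treasure island, i.e. whether the string is accepted."""
    island = start
    for c in choices:
        island = transitions[island][c]
    return island == treasure

# A tiny three-island map (hypothetical): from each island,
# choice 'A' or 'B' sends the ship somewhere.
ISLANDS = {
    "Pirates' Cove":  {"A": "Shipwreck Bay",  "B": "Pirates' Cove"},
    "Shipwreck Bay":  {"A": "Shipwreck Bay",  "B": "Treasure Island"},
    "Treasure Island": {"A": "Treasure Island", "B": "Pirates' Cove"},
}
```

For example, starting at Pirates' Cove, the string "AB" reaches Treasure Island, while "AA" leaves the ship stuck at Shipwreck Bay — exactly the kind of pattern the kids discover by comparing their written-down paths.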
I also tried the Human Computer Interaction and Turing Test activities during the mini-course. It was a bit harder to convince them of the relevance of the Chocolate Factory design discussion for HCI, but they seemed to enjoy these as well. If nothing else, they had some really good discussions about the topics, so perhaps they'll see the connection to computer science better later on and look back on their experience here.
You can see some of the events from around the world that used CS Unplugged material on this Google Map. My mini-course is on there - see if you can find it! I hope, in the near future, to be able to add more events to this map. In particular, our Women in Science and Engineering group hopes to work on some outreach programs for pre-university girls, so there's a huge opportunity to incorporate a few key CS Unplugged activities.
Tuesday, June 3, 2008
I've been spending some time lately putting together a few options for the new Carleton University Women in Science and Engineering chapter. To make the final selection, we'll be asking the current members of the group for their top five choices. I'm very excited to see which one they'll pick, because I can't decide myself which I like the best.
Click this small and hard to read preview for a better look.
We know our color scheme is going to be green and purple, but the color choices in these previews are not necessarily final. There were also a few variations that I didn't include here. For some logos, I have a "short form" version that will be better suited to promo items like pens or shirts. These are the ones that say "CU-WISE." They will be partnered with one of the longer versions that will be used in letterheads and posters.