Project description for INF5071 / Fall 2009

Project Details

This project will be conducted in pairs (unless a valid reason for working individually is provided). It should consume roughly 30 hours per person, for a total of 60 hours per pair. It has five milestones:
The project plan is not meant to be a formal project plan as taught in software engineering. That alone would consume the entire allocated time. Instead, we expect an informal description that includes the following:
Selecting and submitting a topic

We have provided a list of topics we find interesting, but if none of them appeal to you or they are already taken, we encourage you to come up with your own suggestions. The only requirement is that the topic be related to efficiency in some manner. Whether you come up with your own project idea or select one from the list below, send an e-mail to (paulbb at ifi.uio.no) with your name and your partner's name, and we will let you know whether the topic is approved/available.
Modern 3D engines such as Ogre (http://www.ogre3d.org/) and Irrlicht (http://irrlicht.sourceforge.net/) support streaming of media content from within the 3D environment. In this respect, it would be interesting to determine the exact capabilities of these engines. Is it, for example, possible to dynamically change the quality of a stream depending on your viewpoint/distance to the source, and to turn the stream off altogether when the media is not in view? If such functionality exists, how does it affect the traffic generated by the server, how many concurrent streams can the environment support, and so forth?
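To make the idea concrete, here is a small engine-agnostic C sketch of distance-based quality selection; the function name, thresholds, and quality levels are invented for illustration and are not part of the Ogre or Irrlicht APIs:

/* Hypothetical sketch: pick a stream quality level from the viewer's
 * distance to the media surface and whether it is visible at all.
 * A real project would hook this into the engine's scene graph and
 * the streaming client's rate-selection mechanism. */
#include <stdio.h>

enum quality { QUALITY_OFF, QUALITY_LOW, QUALITY_MEDIUM, QUALITY_HIGH };

enum quality select_quality(double distance, int in_view)
{
    if (!in_view)        return QUALITY_OFF;     /* not visible: stop the stream */
    if (distance < 10.0) return QUALITY_HIGH;    /* thresholds are made up */
    if (distance < 50.0) return QUALITY_MEDIUM;
    return QUALITY_LOW;
}

int main(void)
{
    printf("distance 5, visible  -> %d\n", select_quality(5.0, 1));
    printf("distance 80, visible -> %d\n", select_quality(80.0, 1));
    printf("distance 5, hidden   -> %d\n", select_quality(5.0, 0));
    return 0;
}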
Non-Uniform Memory Access/Architecture (NUMA) is a memory design used in many multicore/multiprocessor architectures, where the memory access time depends on the memory location relative to a processor. Given such an architecture, how do different loads (e.g., memory accesses of small and large chunks) with varying characteristics (frequent access, infrequent access, etc.) affect the run-time of different applications? Which schedulers work well with such architectures, for which applications does NUMA awareness make sense, and for which does it not?
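To make the benchmarking idea concrete, here is a minimal C sketch assuming a Linux system with libnuma; it simply compares the time to touch a buffer allocated on the local node with one allocated on a remote node. The buffer size, stride, and node choices are arbitrary illustrations, not a recommended methodology:

/* Minimal local-vs-remote access probe (a sketch, not a full benchmark).
 * Compile with something like "gcc -O2 numa_probe.c -lnuma". */
#include <stdio.h>
#include <time.h>
#include <numa.h>

static double touch(volatile char *buf, size_t size)
{
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (size_t i = 0; i < size; i += 64)     /* stride of one cache line */
        buf[i]++;
    clock_gettime(CLOCK_MONOTONIC, &t1);
    return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
}

int main(void)
{
    const size_t size = 256 * 1024 * 1024;    /* 256 MB per buffer */

    if (numa_available() < 0) {
        fprintf(stderr, "NUMA not available on this system\n");
        return 1;
    }

    int last = numa_max_node();               /* highest node number */
    numa_run_on_node(0);                      /* pin ourselves to node 0 */

    char *local  = numa_alloc_onnode(size, 0);     /* memory on our node */
    char *remote = numa_alloc_onnode(size, last);  /* memory on a remote node */
    if (!local || !remote) {
        fprintf(stderr, "allocation failed\n");
        return 1;
    }

    printf("local  (node 0):  %.3f s\n", touch(local, size));
    printf("remote (node %d): %.3f s\n", last, touch(remote, size));

    numa_free(local, size);
    numa_free(remote, size);
    return 0;
}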
A variety of serialization mechanisms exist for creating a binary representation of the data belonging to an object (in the OO paradigm) or a struct, e.g., Java Serialization, Boost::Serialization for C++, and XDR for C. How well do these different techniques scale with the size of the data to serialize, and how do they pack the data, i.e., at the bit level, the byte level, or in multiples of bytes? Which produces the smallest representation, which takes the longest time, which is most widely available across different architectures, and what is the complexity of the respective implementations?
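As a starting point, the following C sketch uses XDR (one of the mechanisms mentioned above) to encode a small struct into a memory buffer and report the encoded size; the struct and its fields are invented for illustration, and the same kind of measurement (encoded size, encode/decode time) would then be repeated for Java Serialization and Boost::Serialization:

/* Encode a struct with XDR and print the number of bytes produced. */
#include <stdio.h>
#include <rpc/rpc.h>

struct sample {
    int    id;
    double value;
    char  *label;
};

/* XDR filter for the struct: the same routine encodes and decodes. */
static bool_t xdr_sample(XDR *xdrs, struct sample *s)
{
    return xdr_int(xdrs, &s->id) &&
           xdr_double(xdrs, &s->value) &&
           xdr_string(xdrs, &s->label, 64);
}

int main(void)
{
    char buf[256];
    XDR xdrs;
    struct sample s = { 42, 3.14, "hello" };

    xdrmem_create(&xdrs, buf, sizeof(buf), XDR_ENCODE);
    if (!xdr_sample(&xdrs, &s)) {
        fprintf(stderr, "encoding failed\n");
        return 1;
    }
    printf("encoded size: %u bytes\n", xdr_getpos(&xdrs));
    return 0;
}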
OpenCL is the next-generation framework for implementing General-Purpose GPU (GPGPU) applications. Given different GPU architectures, how well do different applications implemented with OpenCL perform? Are there certain applications that run better on particular architectures, and why?
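To illustrate what an OpenCL measurement harness starts from, here is a minimal vector-addition host program in C; it is a sketch with almost no error handling, and a real project would time kernels with OpenCL events and vary the problem size, kernel, and device:

/* Minimal OpenCL host program: adds two float vectors on the first
 * available device. Compile with something like "gcc vadd.c -lOpenCL". */
#include <stdio.h>
#ifdef __APPLE__
#include <OpenCL/opencl.h>
#else
#include <CL/cl.h>
#endif

static const char *src =
    "__kernel void vadd(__global const float *a,\n"
    "                   __global const float *b,\n"
    "                   __global float *c)\n"
    "{\n"
    "    int i = get_global_id(0);\n"
    "    c[i] = a[i] + b[i];\n"
    "}\n";

int main(void)
{
    enum { N = 1024 };
    float a[N], b[N], c[N];
    size_t n = N;
    for (int i = 0; i < N; i++) { a[i] = (float)i; b[i] = 2.0f * i; }

    cl_platform_id platform;
    cl_device_id device;
    cl_int err;
    clGetPlatformIDs(1, &platform, NULL);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_DEFAULT, 1, &device, NULL);

    cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, &err);
    cl_command_queue q = clCreateCommandQueue(ctx, device, 0, &err);

    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, &err);
    clBuildProgram(prog, 1, &device, NULL, NULL, NULL);
    cl_kernel kernel = clCreateKernel(prog, "vadd", &err);

    cl_mem da = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                               sizeof(a), a, &err);
    cl_mem db = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                               sizeof(b), b, &err);
    cl_mem dc = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, sizeof(c), NULL, &err);

    clSetKernelArg(kernel, 0, sizeof(cl_mem), &da);
    clSetKernelArg(kernel, 1, sizeof(cl_mem), &db);
    clSetKernelArg(kernel, 2, sizeof(cl_mem), &dc);

    /* Run the kernel over N work-items and read the result back. */
    clEnqueueNDRangeKernel(q, kernel, 1, NULL, &n, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(q, dc, CL_TRUE, 0, sizeof(c), c, 0, NULL, NULL);

    printf("c[10] = %f (expected %f)\n", c[10], a[10] + b[10]);
    return 0;
}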
Hadoop is an open-source implementation of MapReduce (originally implemented by Google). How well does Hadoop perform for different workloads, and how well do different problems map to this way of processing data? How much data is required before processing in Hadoop becomes beneficial? How many nodes are required, and so forth?
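Hadoop's native API is Java, but the Hadoop Streaming interface lets any executable that reads lines from stdin and writes tab-separated key/value pairs to stdout act as a mapper or reducer. To stay in one language with the other sketches, here is a hypothetical word-count mapper in C; the matching reducer would sum the counts per word, since Streaming delivers the mapper output sorted by key:

/* wc_map: emit "<word>\t1" for every word read from stdin.
 * Launched via the Hadoop Streaming jar with -mapper pointing at the
 * compiled binary (and a corresponding -reducer); exact jar paths and
 * options depend on the Hadoop installation. */
#include <stdio.h>
#include <ctype.h>

int main(void)
{
    int c, in_word = 0;
    while ((c = getchar()) != EOF) {
        if (isalnum(c)) {
            putchar(tolower(c));   /* normalize case while emitting the word */
            in_word = 1;
        } else if (in_word) {
            printf("\t1\n");       /* end of word: emit the key/value pair */
            in_word = 0;
        }
    }
    if (in_word)
        printf("\t1\n");
    return 0;
}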
For further inspiration, you can also take a look at the topic suggestions from 2008.

Current Pairs and Selected Projects