PaperAnalysis / OldPaperAnalysis
Currently in the state of early sketches, this is a master thesis project at the Centre for Image Analysis, Uppsala University / SLU. The aim of the project is to visualise a volume of paper fibre for presentation. Within the volume, the segmentation between the upper, centre and lower surfaces should be visualised (and it should be possible to remove each surface at runtime). The current plan is to implement the PaperAnalysis volume rendering libraries as an extension to TempEngine.
| Explanations |
| DONE || NOT DONE || POSTPONED |
Work to be done
- Implement line drawing in 3D.
- Adapt line drawing to work with non-square pixels.
- Create an image of a volume, not necessarily the required one.
- Perform normal calculations.
- Final documentation sweep.
- Test VTK and learn the basics.
- Install VTK on the office machine.
- Transfer the paper data to the office machine and try to convert it to a format usable by VTK.
- Finish the hit calculations on the cube.
- Read about line drawing schemes to create a line in the discrete space of the volume.
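Since several of the tasks above concern drawing lines in the discrete space of the volume, a minimal sketch of a 3D Bresenham-style traversal may be useful as a starting point. This is an illustrative sketch, not the project's actual implementation, and all names are my own:

```cpp
#include <algorithm>
#include <cstdlib>
#include <vector>

struct Voxel { int x, y, z; };

// Collect the voxels on a straight line between two grid points using a
// 3-D generalisation of Bresenham's algorithm (integer arithmetic only).
std::vector<Voxel> line3d(Voxel a, Voxel b)
{
    std::vector<Voxel> out;
    int dx = std::abs(b.x - a.x);
    int dy = std::abs(b.y - a.y);
    int dz = std::abs(b.z - a.z);
    int sx = (b.x > a.x) ? 1 : -1;
    int sy = (b.y > a.y) ? 1 : -1;
    int sz = (b.z > a.z) ? 1 : -1;
    int dm = std::max(dx, std::max(dy, dz));   // length along the dominant axis
    int ex = dm / 2, ey = dm / 2, ez = dm / 2; // per-axis error accumulators
    Voxel p = a;
    for (int i = 0; i <= dm; ++i) {
        out.push_back(p);
        ex -= dx; if (ex < 0) { ex += dm; p.x += sx; }
        ey -= dy; if (ey < 0) { ey += dm; p.y += sy; }
        ez -= dz; if (ez < 0) { ez += dm; p.z += sz; }
    }
    return out;
}
```

For non-square pixels, one option is to run the same traversal purely in index space and apply the anisotropic voxel size only when converting indices back to physical coordinates.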
A version of the API can always be found here. This API is updated irregularly (but usually before and after a big push).
Currently there are several popular ways of implementing volume rendering. I aim to produce real-time images, which means I need ways to handle the generally large data sets the program should support. I will need to read up on compression schemes to reduce the memory footprint. Using a traditional rendering method I could represent the data as bit vectors, which would reduce its size considerably. Interesting options for visualization include:
To produce a user interface for the program I am also evaluating several alternatives; the toolkit candidates are listed after the rendering methods below.
- Shear Warp. I have not seen any implementation of this method, but this paper (Lacroute, Levoy) from 1994 speaks highly of it, reporting rendering of an image in about one second. Considering that this was 1994, computing capacity has increased a great deal since (though so have the demands on graphical resolution).
- Splatting, mentioned in the above paper, seems to have been the second fastest algorithm around at the time. It basically involves projecting ("splatting") each voxel onto the image buffer rather than rendering the volume piecewise. I need to gather more information.
- Marching Cubes, which involves extracting surfaces from the voxels and then rendering those surfaces by traditional means, i.e. OpenGL. This could be used, but the amount of geometry generated would probably mean the data has to be downscaled before rendering. This method is also probably more interesting when the data contains larger solid volumes (I suspect these volumes will be very stringy).
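The bit-vector representation mentioned earlier — storing a binary fibre/no-fibre volume at one bit per voxel — could be sketched like this (the class and its methods are purely illustrative, not part of any existing library):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// A binary volume packed one bit per voxel, cutting memory use to 1/8th
// of a one-byte-per-voxel representation.
class BitVolume {
public:
    BitVolume(std::size_t w, std::size_t h, std::size_t d)
        : w_(w), h_(h), bits_((w * h * d + 7) / 8, 0) {}

    void set(std::size_t x, std::size_t y, std::size_t z, bool v) {
        std::size_t i = index(x, y, z);
        if (v) bits_[i / 8] |=  std::uint8_t(1u << (i % 8));
        else   bits_[i / 8] &= std::uint8_t(~(1u << (i % 8)));
    }
    bool get(std::size_t x, std::size_t y, std::size_t z) const {
        std::size_t i = index(x, y, z);
        return (bits_[i / 8] >> (i % 8)) & 1u;
    }

private:
    std::size_t index(std::size_t x, std::size_t y, std::size_t z) const {
        return (z * h_ + y) * w_ + x;   // x varies fastest, z slowest
    }
    std::size_t w_, h_;
    std::vector<std::uint8_t> bits_;
};
```

At one bit per voxel, a 512³ binary volume takes 16 MiB instead of the 128 MiB a one-byte-per-voxel representation would need.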
- GTK, completely open source, which makes it free to use on all platforms; it is available for both Windows and Linux. The drawback of a GPL'ed toolkit is that if the application is made publicly available it cannot be commercial. (I don't know if this is a problem for my instructor.)
- Qt was completely free, but the development kit for Windows is currently not free, which means free development under Windows is not possible.
- MFC, the Microsoft Foundation Classes, has the drawback that development under Linux is completely impossible, and it is thus not really interesting (since I do most of my development under Linux).