Author Topic: I need someone who can decompile programs  (Read 74 times)

no1uno

  • Global Moderator
  • Foundress Queen
  • *****
  • Posts: 681
I need someone who can decompile programs
« on: March 20, 2010, 08:21:41 AM »
I'm trying to design an amateur Raman spectrometer, and provided my lasers (and the grating films too) are delivered, that should be well and truly feasible...

I have found several programs (some freeware) that can do "part" of what I need them to do. I'd need a proper installable/distributable file, which can interact with the TWAIN service and then further process the images from the camera/spectrometer, plus I'd need to be able to install and register the spectrometer.... But none do it all (or even come close) without extensive manual fucking around, and it really is something that could be automated: convert the image to 8-bit grayscale, then graph pixel v intensity - that's really the basics. The idea is a neon light plus a split beam - one path through a reference standard (ie. the solvent) and one through the analyte - all imaged at once on a single CCD image split into 3: one image, one grating, three physically separated sources. The known spectrum of the neon bulb would fix the positions of the spectral peaks, while comparing the Raman spectrum of the reference sample against that of the analyte would show the differences, allowing the 'actual spectrum' of the analyte less solvent to be determined.

But to cut a long story short, I need to work out a number of parts. Getting the full image converted to 8-bit grayscale, then plotting pixel v intensity for each of the three images on the whole image, is the major project atm (I'm thinking of iterating through the image line by line, pixel by pixel, to get intensity - slow, but workable given the massive amount of memory available nowadays, especially as the resolution is only 300K pixels). If I can set up the spectrometer properly, I should be able to just take 3 slices of the picture automatically, about 10-20 pixels wide and full length, then plot pixel position v intensity for each.

Anyone got anything to contribute?
"...     "A little learning is a dang'rous thing;
    Drink deep, or taste not the Pierian spring:
    There shallow draughts intoxicate the brain,
    And drinking largely sobers us again.
..."

badger

  • Larvae
  • *
  • Posts: 24
Re: I need someone who can decompile programs
« Reply #1 on: March 20, 2010, 10:32:52 AM »
If you speak Python or are willing to learn, the Python Imaging Library will do exactly what you need -- or most of it anyway. You'd read in the image as an object, which you can iterate over and do various transformations on (grayscale, enlarge, reduce, whatever). To produce the graph, you'd want to iterate over the pixels in the Image object and probably use matplotlib (also a Python library) to render your graph.

I don't quite get what you mean by "pixel position vs. intensity", can you explain that a little more?

Anyway, the code to read in an image and grayscale it will look like this:

Code: [Select]
from PIL import Image, ImageOps
im = Image.open('filename.jpg')
gray = ImageOps.grayscale(im)

To iterate over all the pixels in the image one by one, you'd then do this (simplified for readability, you could do this more compactly but this will work):

Code: [Select]
width, height = gray.size
for i in range(width):
    for j in range(height):
        print(gray.getpixel((i, j)))    # prints each pixel's value - the double parens matter: the argument is an (x, y) tuple

Anyway, if I can wrap my head around what you want to graph, I can show you how to do the appropriate matplotlib code too. I'll go read up on Raman spectroscopy.
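In the meantime, here's a rough sketch of what the PIL + matplotlib combination might look like for plotting one row's intensity. The gradient frame built with Image.new/putdata is just a stand-in so the snippet runs on its own - in real use you'd Image.open the file saved from the camera:

```python
from PIL import Image
import matplotlib
matplotlib.use('Agg')              # render off-screen; no display needed
import matplotlib.pyplot as plt

# Synthetic grayscale frame standing in for a real capture:
# each pixel's value is just its x position (mod 256).
gray = Image.new('L', (640, 480))
gray.putdata([x % 256 for y in range(480) for x in range(640)])

width, height = gray.size
row = height // 2                  # one sensor row, as an example
intensities = [gray.getpixel((x, row)) for x in range(width)]

plt.plot(range(width), intensities)
plt.xlabel('pixel position')
plt.ylabel('intensity (0-255)')
plt.savefig('row_intensity.png')   # or plt.show() interactively
```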

no1uno

  • Global Moderator
  • Foundress Queen
  • *****
  • Posts: 681
Re: I need someone who can decompile programs
« Reply #2 on: March 20, 2010, 11:32:18 AM »
A Raman spectrum (or any other spectrum) is based upon the intensity of light at a given wavelength (generally in Angstroms/nanometers)... When working with 8-bit grayscale images, the intensity is simply the value of each pixel, while using grayscale avoids the issues involved in manipulating RGB (where several RGB codes can map to the same color/wavelength).

If one were to align the bottom axis of a 300K sensor (640x480 pixels; 640 x 480 = 307,200 pixels) with the spectrum, then you could, hypothetically, align the visible-NIR wavelengths (the ones visible to a CMOS/CCD camera anyway, from about 400nm to around 900nm) so that you get one wavelength (nm) per pixel.

You then align the long axis vertically to that, so that over the full 640px you split it into 3 - so 3 x 210px (which leaves 2 x 5px for the separators) - which would be the pixels you'd expose to the 3 different light sources (two from the 532nm laser - one passed through the reference sample, the other through the analyte+reference - and one from a neon bulb).

We know the spectral characteristics of a neon bulb, which allows us to calibrate the spectrometer: its peaks are at known wavelengths, and given we are only using a single CMOS/CCD sensor and a single grating, those peaks will fall at the same positions in the other two parts of the picture. As the scattered wavelengths (whether scattered by a reflective grating or refracted by a transmission grating) increase in an essentially linear fashion for most of our purposes (ie. they increase in a way we can predict), we could (and many companies do) utilize CMOS/CCD sensors to collect the spectrum, assigning either one pixel or a bunch of pixels to each wavelength (depending upon the resolution of the sensor and the range required).

Thus, the pixel's position on the short axis of the CMOS/CCD image correlates effectively (within limits) to its wavelength in nm. So if we plot the position of the pixel on the bottom axis (corrected against the known spectral response of the neon bulb, and assigned wavelengths in nm therefrom), then we can plot the intensity of the light at that wavelength by merely reading the intensity at that pixel position (realistically, you'd average all the pixels corresponding to that position, then use that average intensity).

Therefore, we graph the average intensity of each pixel column (averaged over the 2-300px height of the sub-image) along the bottom axis, assigning every pixel position a wavelength in nm, with the intensity plotted as pixel position/wavelength v intensity.

As the nearest equivalent hardware, which does not include self-alignment every time (nor the use of a reference and analyte every shot), costs several thousand US$, this could make for quite an interesting amateur project (especially given they use essentially the same hardware in essentially the same way).

PS That Python Imaging Library looks fucking interesting, I'll browse through that, nice one ;) I wonder if I should build the software/desktop application on XULRunner? It would allow for real-time comparison with the various free-access Raman spectral libraries...

EDIT

I was thinking about it: once we get the image from TWAIN (which we can now do with Java using a workaround), we can then play with it...

Say, for example, we are using a 532nm laser and a 550nm edge filter; that means only the remainder of the visible spectrum is likely to be of any interest whatsoever...

Now, let us also say we are using a 640x480px (300Kpx CMOS/CCD) camera module (which, being outdated, is cheap as shit).

Now, we want to split the beam (once we narrow it - I've gone into that elsewhere) to give us a Raman spectrum of both the reference sample (solvent only) and the analytical sample (solvent + analyte). We also want to calibrate the spectrometer, and a neon bulb uses fuck-all power and costs a couple of bucks.

Ok, that being so, we'll need some orange glass (orange glass from Schott works as an alternative >550nm edge filter) to remove reflected 532nm light and Rayleigh-scattered light. The anti-Stokes light is downshifted, so only the upshifted Stokes-shifted light will be acquired. It is apparently best to collect this perpendicular to the beam, so put a mirror on the other side to improve the amount of Stokes-shifted light collected through the >550nm edge filter. Pass that light into optical fiber (cheap as chips in short lengths; it is available on auction sites), then into the slitless spectrometer.

Now, the spectrometer in question is going to be the tricky bit - it will have one grating and three light sources (the Stokes-scattered light from both the reference and analytical samples, plus the calibrating light from the neon bulb). Each of these will enter the spectrometer at a different point and pass through the grating between separating walls, which extend all the way to the CCD/CMOS sensor and allow us to collect 3 sub-images on one image.

Now, given that the visible spectrum goes from about 400nm to approximately 800nm, and we have nothing under 550nm, I suggest we utilize the long side of the sensor (640px x 480px) to collect the spectrum, limiting it to 500px (n.b. 250nm/500px = 0.5nm/px resolution, which will be shithot when it is made to work). As the grating is the same for all three light sources, and the light source for both the reference & analytical samples is the same laser, everything is directly comparable.

First off, in order to work with the image, we'll have to identify where our sub-pictures are, then pass that information on to the program (Java - easier to build & distribute, ie. it is free whereas other tools are not, plus there are a lot more tutorials on doing this in Java than in most languages).

We then utilize that data to instruct the program to iterate through the pixels of each sub-image, using an algorithm to determine the intensity/luminosity of each pixel and keeping the values in memory for each column (from y1:y200 and x1:x500). Then we can do something cool: average out the luminosity/intensity of x1-x500 by adding together the values of each column y1:y200 and dividing by 200. That will reduce the effects of stray pixels and noise dramatically.
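That column-averaging step is simple enough to sketch in Python (the 200x500 dimensions are the ones assumed above; the synthetic data just stands in for a real grayscale sub-image):

```python
# Sub-image as rows of pixel intensities (0-255); synthetic values stand in
# for the real grayscale sub-image cut from the CCD frame.
ROWS, COLS = 200, 500
sub_image = [[(x + y) % 256 for x in range(COLS)] for y in range(ROWS)]

# Collapse each column y1:y200 to a single averaged intensity per x position,
# damping stray pixels and noise.
column_avg = [
    sum(sub_image[y][x] for y in range(ROWS)) / ROWS
    for x in range(COLS)
]
```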

We keep that variable[] array in memory for all 3 sub-images. Then we look at the intensities of the calibration source (the neon light) and, in accordance with its known spectral peaks, assign absolute wavelengths to the x-axis of all 3 images (not quite linear, but close enough - if one were interested enough, it could be worked out mathematically).
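The wavelength assignment could be as simple as a least-squares line through the neon peaks, something like this (the pixel positions are made-up examples; the wavelengths are well-known strong neon emission lines):

```python
# Fit pixel position -> wavelength from the neon calibration peaks.
peak_pixels = [70, 180, 306]           # hypothetical detected peak centres
peak_nm = [585.2, 640.2, 703.2]        # known neon lines (nm)

n = len(peak_pixels)
mean_x = sum(peak_pixels) / n
mean_y = sum(peak_nm) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(peak_pixels, peak_nm))
         / sum((x - mean_x) ** 2 for x in peak_pixels))
intercept = mean_y - slope * mean_x

def pixel_to_nm(px):
    """Assign a wavelength to any x-pixel via the fitted calibration line."""
    return slope * px + intercept
```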

We could then draw a graph of average luminosity/intensity (y) v wavelength (x, an essentially linear progression) for all 3 sub-pictures. In addition, having subtracted the relevant (x,y) values of the solvent spectrum from the analytical spectrum, we would have a good Raman spectrum of the analyte itself.
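The subtraction itself is trivial once both spectra sit on the same calibrated x-axis - a sketch with made-up numbers:

```python
# Averaged intensities on a shared wavelength axis (synthetic values):
reference = [10.0, 12.0, 11.0, 40.0, 11.0]   # solvent only
analyte = [10.0, 12.0, 30.0, 41.0, 11.0]     # solvent + analyte

# Point-by-point subtraction leaves the analyte's own Raman signal.
difference = [a - r for a, r in zip(analyte, reference)]

# Index of the strongest analyte-only band, ready to map to a wavelength.
peak_index = max(range(len(difference)), key=difference.__getitem__)
```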

Provided we can access TWAIN and it allows us to process the images from the spectrometer without having to save them first, reopen them and go through a shitload of extra (manual) manipulation, this "COULD" work... In fact, it should work - the algorithm above is the way RGB pictures are converted to 8-bit grayscale, the use of CCD/CMOS sensors for this job is VERY OLD NEWS, and provided we can get the resolution worked out, it would be one hell of an analytical instrument (about the size of an iPhone). Even better, it may be possible to power the laser, the neon light and the CCD/CMOS camera from a USB 2.0 port, which would make it hell portable.
« Last Edit: March 20, 2010, 03:05:44 PM by no1uno »

Sedit

  • Global Moderator
  • Foundress Queen
  • *****
  • Posts: 2,099
Re: I need someone who can decompile programs
« Reply #3 on: March 22, 2010, 02:50:43 PM »
Decompiling a program is easy and there are many available programs to do it for you. However, making sense of the ASM it spits out is another matter altogether. At one point I could have done this with ease, but I can bet for sure that computers have advanced so far since my tinkering that the old code I'm used to would be of little help now.
There once were some bees and you took all there stuff!
You pissed off the wasp now enough is enough!!!

timecube

  • Subordinate Wasp
  • ***
  • Posts: 230
Re: I need someone who can decompile programs
« Reply #4 on: March 23, 2010, 03:55:21 PM »
Ahh, so that's what you were getting at.  My PM was much less than useful then, haha.  I have trouble believing there aren't some open-source programs you can grab bits and pieces from.

A wrapper library for using TWAIN in C/C++
http://www.codeguru.com/cpp/g-m/multimedia/twain/article.php/c1585

The bitmap format is relatively simple.  It's just a header then an array of pixel values so they're fairly easy to read and manipulate, and there are a number of libraries for working with them.
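For instance, pulling the dimensions out of the fixed-size headers takes only a few lines - here in Python with struct for brevity, building a fake header in memory so the snippet stands alone (the classic BITMAPINFOHEADER layout is assumed):

```python
import struct

def bmp_dimensions(data):
    # 14-byte file header: 2s magic, u32 file size, 2 x u16 reserved, u32 pixel offset
    magic, _size, _r1, _r2, offset = struct.unpack_from('<2sIHHI', data, 0)
    assert magic == b'BM', 'not a BMP file'
    # 40-byte info header starts at byte 14: u32 header size, i32 width, i32 height, ...
    _hdr_size, width, height = struct.unpack_from('<Iii', data, 14)
    return width, height, offset

# Fake 640x480 header purely for demonstration.
header = (struct.pack('<2sIHHI', b'BM', 54, 0, 0, 54)
          + struct.pack('<Iii', 40, 640, 480)
          + bytes(28))
```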

Converting to greyscale might not give as good results as writing your own intensity-determining routine based on the raw data.  Then you'd have more options for calibrating it based on the color of light you are using, but I'm sure there are some grayscale routines floating around too if you want to just stick with that.
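A hand-rolled intensity routine is just a weighted sum over the RGB channels - the BT.601 luma weights below are the same ones common grayscale conversions use, and re-weighting them for your light source is the whole point of doing it yourself:

```python
def intensity(r, g, b):
    """Perceptual intensity of an RGB pixel using ITU-R BT.601 luma weights."""
    return 0.299 * r + 0.587 * g + 0.114 * b
```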

I hope I understand what you're trying to do properly; I can probably track down some more libraries/source if you need it.  What OS are you doing this in?

no1uno

  • Global Moderator
  • Foundress Queen
  • *****
  • Posts: 681
Re: I need someone who can decompile programs
« Reply #5 on: April 03, 2010, 04:27:38 AM »
I'd honestly prefer C/C++; it seems more dependable, and I can utilize the simplicity of XULRunner inside it too... :D

Yeah well, sheet style, here's what has to be worked out:

(1) TWAIN Capture image

(2) C/C++ Split image into the 3 relevant parts/or simply convert those parts to grayscale (we could always just use the x,y position to iterate through the columns therein)

(3) Use a filter - find the median of each column; every pixel that is <0.75x or >2x the median, we replace with the mode, then take the mean/average of the entire column. That effectively gives us a compensated average of what, 200 spectra (each row effectively being a separate spectrum), with noise reduction built in?
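Step (3) sketches out easily in Python (made-up column values; 250 and 60 play the stray pixels):

```python
from statistics import mean, median, mode

def filter_column(column):
    """Replace outliers (<0.75x or >2x the median) with the mode, then average."""
    med = median(column)
    mod = mode(column)
    cleaned = [mod if (v < 0.75 * med or v > 2 * med) else v for v in column]
    return mean(cleaned)

# Example column of intensities; 250 and 60 are stray pixels.
avg = filter_column([100, 101, 100, 99, 100, 250, 100, 60])
```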

(4) Work backwards from the spectrum of the neon bulb to determine the angle at which the known wavelengths were refracted/diffracted, then, using that, work out the algorithm and assign every x-pixel a wavelength (or part thereof).
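Working backwards in step (4) is just the grating equation d*sin(theta) = m*lambda rearranged - a sketch assuming a 1000 lines/mm transmission grating (yours may differ) and first-order diffraction:

```python
import math

LINES_PER_MM = 1000            # assumed grating density
d_nm = 1e6 / LINES_PER_MM      # groove spacing in nm (1000nm here)

def first_order_angle(wavelength_nm):
    """First-order (m=1) diffraction angle in degrees: d*sin(theta) = lambda."""
    return math.degrees(math.asin(wavelength_nm / d_nm))

# Predicted angles for the known neon calibration lines (nm):
neon_angles = {nm: first_order_angle(nm) for nm in (585.2, 640.2, 703.2)}
```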

(5) Compensate for the known transmission features of the OG-550 Schott glass (optional - ie. checkbox), by converting all values to represent what they would be if the transmission were 100% across the entire transmission range (which we can do, knowing the wavelength of each pixel)

(6) Convert the array dataset to percentages (ie. scale each value to a 0-100 range) and graph X v Y (wavelength v intensity). Of course, in this instance, we are only really interested in how far the spectral lines are from the incident light (532nm)...

PS I really have to finish some ongoing projects (some are even legitimate study :D). Once I do, I will try out whether I can narrow a diode-laser beam down sufficiently using an array of transmission gratings: set them at the computed angle so that the only wavelength to come out straight is the 532nm - one on a vertical angle, one horizontal, then another vertical - and use black fabric (as suggested by not_important @SM) to absorb the stray wavelengths, then put the beam straight into the spectrometer along with the neon light... It should be possible to determine the width of the beam (to within .5nm).
« Last Edit: April 03, 2010, 04:35:36 AM by no1uno »