CS463/516 Medical Imaging Final Project

Due Friday, August 15th at 11:55 pm. *There will be no extensions: my deadline to submit final marks is August 18, and I need at least 3 days to mark.*

The final project may be done in groups of up to 4 (four) people. Groups larger than 4 are not permitted.

Submission format: one partner should submit a .zip file to Moodle containing:
1) A pdf with images and descriptions showing you have completed all parts of the assignment/bonuses
   a. It should also include the most important code segments, so I can easily evaluate your algorithms
2) Full Python source code in .py format
3) A README file describing any incomplete parts of the assignment and your group member names

Final Project - Visual cortex decoding (mind reading)

Acquiring BOLD fMRI images from the brain of a participant who is viewing images or watching a movie sets up an interesting machine learning problem: given the brain activity (the BOLD fMRI signal in visual cortex) and a set of images displayed to the subject, train a model that can reconstruct what the person is seeing for unknown images, based on the BOLD signal in their visual cortex. Example (Figure 1).

Article describing the dataset (read Materials and Methods to understand the dataset): https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1006633

Dataset, from OpenNeuro: Deep Image Reconstruction https://openneuro.org/datasets/ds001506/versions/1.3.1

Get the natural images here (examples below): https://drive.google.com/file/d/1RqM4qSh4L6YKrNOjdCYJJy8Aabe6n5IQ/view?usp=sharing

You may use only a single subject to reduce the data size (but feel free to extend to all subjects). In the natural image presentation experiments, the stimulus images are named like 'n03626115_19498', where 'n03626115' is the ImageNet/WordNet ID for a synset (category) and '19498' is the image ID.
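The stimulus naming convention above can be parsed with a simple string split when matching stimulus events to image files (a minimal sketch; the function name is my own):

```python
def parse_stimulus_name(name):
    """Split an ImageNet-style stimulus name into (synset_id, image_id).

    e.g. 'n03626115_19498' -> ('n03626115', '19498')
    """
    synset_id, image_id = name.split('_')
    return synset_id, image_id

print(parse_stimulus_name('n03626115_19498'))  # ('n03626115', '19498')
```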
You can download the geometric shape images and alphabetical shape images at OpenNeuro; the natural images are only available through the link provided above. All information needed to extract events is available at OpenNeuro in the *.tsv files.

Step 1: You will want to spatially and temporally normalize all BOLD runs before you start. Use flirt to register all the individual runs to a single run (you can pick any run); this puts all runs in the same space, which is necessary for machine learning to work correctly. You'll probably want to use excerpts from the preprocessing pipeline (I will provide more videos on this). Each voxel should then be normalized to mean zero and standard deviation 1. You may also consider bandpass filtering the image. Try various pre-processing strategies to see what works best.

Step 2: After registering all runs, set up your machine learning problem by extracting 'samples' from the BOLD signal using the *.tsv stimulus files accompanying each run. Each sample in the training set is a full-brain BOLD signal at a certain time point (the time point 4 seconds after onset of the image). If you want to use convolutional neural networks, you can flatten the BOLD samples into a 2d representation or keep the original 3d representation; otherwise each sample will just be a 1d vector of size n_voxels (where n_voxels is the number of voxels in the brain).
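The voxel normalization in Step 1 and the sample extraction in Step 2 can be sketched as follows. This is a minimal numpy sketch on synthetic data: in practice you would load each registered 4D NIfTI run with nibabel and read onset times from the run's *.tsv file, and the TR value here is an assumption, not taken from the dataset.

```python
import numpy as np

def zscore_voxels(bold4d):
    """Normalize each voxel's time series to mean 0, std 1 (Step 1).

    bold4d: array of shape (x, y, z, t).
    """
    mean = bold4d.mean(axis=-1, keepdims=True)
    std = bold4d.std(axis=-1, keepdims=True)
    std[std == 0] = 1.0  # avoid dividing constant voxels by zero
    return (bold4d - mean) / std

def extract_samples(bold4d, onsets_sec, tr=2.0, delay=4.0):
    """Take the volume ~4 s after each stimulus onset (Step 2) and
    flatten it to a 1d vector of length n_voxels, one per stimulus."""
    vols = [int(round((onset + delay) / tr)) for onset in onsets_sec]
    return np.stack([bold4d[..., v].ravel() for v in vols])

# toy stand-in: a 4x4x4 'brain' with 50 volumes, TR = 2 s (assumed)
rng = np.random.default_rng(0)
bold = zscore_voxels(rng.normal(size=(4, 4, 4, 50)))
samples = extract_samples(bold, onsets_sec=[0.0, 12.0, 24.0], tr=2.0)
print(samples.shape)  # (3, 64): 3 stimuli, 64 voxels each
```

The same functions apply unchanged to real data once the runs are co-registered: replace the random array with `nib.load(run_path).get_fdata()` and the onset list with the onset column of the run's events file.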
There are many techniques to flatten the brain: you may consider using the theta, phi components of a spherical transform over the x,y,z coordinates (called the 'pancake transform'), or you can use FreeSurfer's method for inflating and flattening the cortex: https://surfer.nmr.mgh.harvard.edu/fswiki/FreeSurferOccipitalFlattenedPatch

Link to paper describing the pancake transform: https://drive.google.com/file/d/1Rex1ktzMiVCBOSN5BTh5EmQDM4Byp_eY/view?usp=sharing

Step 3: After extracting samples (flattened or otherwise) using the stimulus file, create a model to predict images based on BOLD samples. The model should predict the raw images, but you can also predict features of the image or some low-dimensional representation of the images.

Results: show the following (your grade is based on this, but you must also submit all code so I can verify):

Figure 1 (20%): Show raw BOLD images of two co-registered runs, and the signal from the same occipital lobe voxel in both images (to show that the images have been spatially normalized correctly).
Figure 2 (10%): Create a small figure showing your model. If it is a neural network, draw a diagram; if something else, explain the parameters, describe the model, why you chose it, etc.
Figure 3 (35%): Natural image reconstruction (see below).
Figure 4 (35%): Artificial image reconstruction (see below).

[Example images for Figures 1, 3, and 4 appear here in the original handout.]

In Figure 4, you should show bar charts representing the average Pearson correlation between the reconstructed images and the ground truth images, based on the test set.

This article may be of interest (the figures above were taken from it): https://www.frontiersin.org/articles/10.3389/fncom.2019.00021/full

We have yet to cover machine learning or deep learning in this course. From now until the end of the course, I will be releasing lectures relating to machine learning and this final project. However, you have complete freedom in the methods you choose to meet the challenges of this project.
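The Figure 4 metric, average Pearson correlation between reconstructed and ground-truth test images, can be computed as in this sketch (numpy only; images of any shape are flattened before correlating):

```python
import numpy as np

def pearson_r(a, b):
    """Pearson correlation between two images (any shape, flattened)."""
    a = np.asarray(a, dtype=float).ravel()
    b = np.asarray(b, dtype=float).ravel()
    a = a - a.mean()
    b = b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def mean_test_correlation(recon_imgs, true_imgs):
    """Average Pearson r over (reconstruction, ground truth) pairs."""
    rs = [pearson_r(r, t) for r, t in zip(recon_imgs, true_imgs)]
    return float(np.mean(rs))

# sanity check: a perfect reconstruction correlates at r ~ 1
img = np.arange(16.0).reshape(4, 4)
print(pearson_r(img, img))  # ~ 1.0
```

The per-image values from `pearson_r` are what you would plot as the bars, with `mean_test_correlation` giving the average over the test set.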
Bonus (+15%): There are many other datasets on OpenNeuro with BOLD data from visual experiments. If you manage to successfully decode some images as above, download some of these other datasets and apply your model. Show the images it produces. Do you see anything that makes sense (e.g., something that looks like a natural image)? If the problem is too hard and you can't get any reasonable-looking natural images, you can instead try to predict only some features of the image (contrast or color, for example). This will gain you partial marks.
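Decoding a scalar feature such as contrast is a much easier target than full image reconstruction. A minimal sketch of a linear decoder on synthetic data, using numpy least squares (the contrast definition and all names here are my own choices, not prescribed by the assignment):

```python
import numpy as np

def image_contrast(img):
    """One simple 'contrast' feature: std of pixel intensities."""
    return float(np.asarray(img, dtype=float).std())

# synthetic stand-ins: 100 BOLD samples of 64 voxels, with the target
# feature linearly embedded plus noise (real samples come from Step 2,
# real targets from image_contrast applied to the stimulus images)
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 64))               # BOLD samples
w_true = rng.normal(size=64)
y = X @ w_true + 0.1 * rng.normal(size=100)  # feature values to decode

X_train, X_test = X[:80], X[80:]
y_train, y_test = y[:80], y[80:]

# fit a linear decoder by least squares, evaluate by test correlation
w, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)
pred = X_test @ w
r = np.corrcoef(pred, y_test)[0, 1]
print(r)
```

The same train/predict/correlate pattern works for any scalar feature; for color you would fit one decoder per channel mean.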