the calculation of the LGN response #668
I'm not sure precisely what you are asking, but if you run the script in the simulator you should be able to see the actual array sizes involved for all sheets and all weights, which will hopefully clarify things. You may not be including the buffer area that gets added around the outside of each sheet to ensure that no connection field is ever cropped off.
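The buffer area mentioned here can be sketched numerically. The radii below are illustrative assumptions, not the exact values used in gcal.ty; the idea is simply that each sheet is padded by the radii of its incoming connection fields so no CF is clipped at the boundary:

```python
# Sketch of sheet-buffering (assumed geometry, not the exact gcal.ty values):
# the retina is padded by the CF radii of every projection stage feeding V1,
# so that no connection field is ever cropped at a sheet edge.
v1_radius = 0.5            # half-width of a 1.0 x 1.0 V1 area
lgn_cf_radius = 0.25       # hypothetical LGN -> V1 CF radius
retina_cf_radius = 0.375   # Retina -> LGN CF radius (from the script)
density = 24               # retina density, as in the script

retina_radius = v1_radius + lgn_cf_radius + retina_cf_radius
retina_units = int(round(2 * retina_radius * density))
print(retina_units)  # 54 units per side with these assumed radii
```

This is why the retina sheet ends up with more units per side than `density * area` alone would suggest.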
Thanks. I need to find out, for example, the actual data of the connection field.
Sorry to disturb you again, but I need to clarify my question, since I am still confused by this calculation. According to equation (2) in Stevens et al. (2013, J. Neurosci.), which gives the response function of the LGN units, the numerator is the sum of the weights times the stimulus values. In the script the weights are generated by:
on_weights = pattern.Composite(generators=[centerg,surroundg],operator=numpy.subtract)
When I open the data, the size of the weights is 256 × 256. However, the retina sheet is much smaller than 256 × 256, which means there are far fewer stimulus points than 256 × 256, so it seems there are not enough stimulus points to multiply against the weights. Or have I misunderstood something?
@jlstevens, can you please check your published simulation and report the sizes of the LGN, Retina, and the LGN receptive fields on the retina? In no case will the LGN RFs on the retina be larger than the retina itself.
I'll have a look. My suspicion is that the 256×256 resolution is what you see when you look at the ImaGen pattern (i.e. for viewing), as I am fairly sure you will see very different numbers once the pattern is turned into weights for a connection field. I don't believe we have any such nice (i.e. power-of-2) numbers for any of the CFs.
Hi, I am not sure what you mean. The script I ran is /topographica/models/stevens.jn13/gcal.ty. The commands are:
surroundg = pattern.Gaussian(size=0.29540,aspect_ratio=1.0,
on_weights = pattern.Composite(generators=[centerg,surroundg],operator=numpy.subtract)
off_weights = pattern.Composite(generators=[surroundg,centerg],operator=numpy.subtract)
I ran the script, opened the data of these weights, and the size of the weights is 256 × 256.
Ah, I see why you are confused! The weights and input patterns are created by ImaGen, a library that lets you specify resolution-independent patterns, much like a vector drawing program such as Illustrator or Inkscape. You need to look at the actual network in memory if you want to see the specific array of values that is used during the simulation, using something like …
Now I see it, thank you so much!
Hi, I would like to ask about the calculation of the responses of the LGN units. In the gcal model script,
on_weights = pattern.Composite(generators=[centerg,surroundg],operator=numpy.subtract)
off_weights = pattern.Composite(generators=[surroundg,centerg],operator=numpy.subtract)
the size is 256 × 256. Based on equation (2) in Stevens et al. (2013, J. Neurosci.), which gives the response function of the LGN units, the numerator is the sum of the weights times the image point values. My problem is that in the model script the radius of the connection field from the Retina sheet to either the LGN ON sheet or the LGN OFF sheet is 0.375, but I am not sure how to choose the image points within this range. Since the retina density is set to 24 in the script, the retina sheet has a total of 60 units if I set the area to 1. That is to say, there are no more than 60 image pixels but 256 weights, so how do I find the counterpart weight of each image point to do the calculation? Thanks for your attention.
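For the numerator of the response function being asked about here, the resolution mismatch disappears once the weights are rendered at the retina's own density over the CF's bounding box, so weights and retinal values align one-to-one. A minimal sketch, with illustrative random values standing in for the actual DoG weights and retinal activity:

```python
import numpy as np

# Sketch of the numerator of the LGN response function (eq. 2 of
# Stevens et al. 2013): the CF weights are rendered at the *retina's*
# density over the CF's extent, so each weight pairs with exactly one
# retinal unit. Values are illustrative, not taken from the model.
density = 24       # retina density, as in the script
cf_radius = 0.375  # Retina -> LGN CF radius, as in the script
n = int(round(2 * cf_radius * density))  # CF spans an 18 x 18 patch

rng = np.random.default_rng(0)
weights = rng.normal(size=(n, n))  # stand-in for the DoG weights
patch = rng.random((n, n))         # stand-in for retinal activity under the CF

numerator = np.sum(weights * patch)  # elementwise product, then summed
print(n)  # 18
```

So the weights actually used in the sum form an 18×18 array (at these densities), not 256×256; the 256×256 array is only a high-resolution rendering for viewing.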