
the calculation of the LGN response #668

Open
dancehours opened this issue May 2, 2017 · 8 comments

Comments

@dancehours

Hi, I would like to ask about the calculation of the responses of the LGN units. In the GCAL model script,

on_weights = pattern.Composite(generators=[centerg,surroundg],operator=numpy.subtract)
off_weights = pattern.Composite(generators=[surroundg,centerg],operator=numpy.subtract)

the size is 256*256. Based on equation (2) in Stevens et al. (2013, J. Neurosci.), the numerator of the response function of the LGN units is the sum of the weights times the image point values. My problem is: in the model script, the radius of the connection field from the Retina sheet to either the LGN ON sheet or the LGN OFF sheet is 0.375, but I am not sure how to choose the image points in this range. Since the retina density is set to 24 in the script, the total number of units in the retina sheet is 60 if I set the area to 1. That is to say, there are no more than 60 image pixels but 256 weights, so how do I find the counterpart weight for each image point in the calculation? Thanks for your attention.
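[Editor's note: the arithmetic behind this question can be sketched in plain NumPy. This is an illustrative sketch only, not the published model: the `gaussian` helper, its `size / 2` sigma approximation, and the random patch are assumptions, and ImaGen's exact Gaussian parameterization differs. It shows that at density 24, a CF of radius 0.375 covers only about 18 pixels per side, not 256.]

```python
import numpy as np

# Illustrative sketch (not the published model): the numerator of the
# LGN response in Stevens et al. (2013), eq. (2), is a weighted sum of
# retinal activity over each unit's connection field (CF).
density = 24    # retina units per unit of sheet area (from the script)
radius = 0.375  # CF radius in sheet coordinates (from the script)

# The CF spans a diameter of 2*radius in sheet coordinates, so at this
# density each LGN unit sees roughly this many retinal pixels per side:
cf_pixels = int(round(2 * radius * density))  # -> 18, far fewer than 256

# Difference-of-Gaussians weights sampled at the CF's own resolution.
# The sizes mirror centerg/surroundg in the script; treating size/2 as
# sigma is an assumption, and L1 normalization mimics DivisiveNormalizeL1.
def gaussian(size, n):
    coords = np.linspace(-radius, radius, n)
    xx, yy = np.meshgrid(coords, coords)
    g = np.exp(-(xx**2 + yy**2) / (2 * (size / 2) ** 2))
    return g / g.sum()

on_weights = gaussian(0.07385, cf_pixels) - gaussian(0.29540, cf_pixels)

# The response numerator for one LGN ON unit is then a plain dot product
# of these weights with the matching patch of retinal activity:
patch = np.random.rand(cf_pixels, cf_pixels)  # stand-in for retinal input
numerator = np.sum(on_weights * patch)
print(cf_pixels, on_weights.shape)
```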

@jbednar
Member

jbednar commented May 2, 2017

I'm not sure precisely what you are asking, but if you run the script in the simulator you should be able to see the actual array sizes involved for all sheets and all weights, and hopefully that will clarify things. You might not be including the buffer area that gets added to the outside of each sheet to ensure that no connection is ever cropped off.
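[Editor's note: the buffer area mentioned here can be sketched with simple arithmetic. This is illustrative only, using the numbers quoted in this thread; the published model's actual padding may also account for other projections, so the real array sizes can differ.]

```python
# Hedged sketch of the buffer described above: Topographica pads each
# sheet so that no connection field is cropped at the sheet border.
# With the script's nominal area 1.0, retina density 24, and a
# retina -> LGN connection-field radius of 0.375, the retina array is
# enlarged by the CF radius on every side:
area = 1.0
density = 24
cf_radius = 0.375

padded = area + 2 * cf_radius                 # 1.75 units of visual space
retina_pixels = int(round(padded * density))  # 42 pixels per side, not 24
print(retina_pixels)
```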

@dancehours
Author

Thanks. I need to find out the actual data of the connection fields, for example.

@dancehours
Author

dancehours commented May 10, 2017

Sorry to disturb you again, but I have to clarify my question, since I am still confused by this calculation. According to equation (2) in Stevens et al. (2013, J. Neurosci.), the numerator of the response function of the LGN units is the sum of the weights times the stimulus values. In the script, the weights are generated by the following:

on_weights = pattern.Composite(generators=[centerg,surroundg],operator=numpy.subtract)
off_weights = pattern.Composite(generators=[surroundg,centerg],operator=numpy.subtract)

I opened the data, and the size of the weights is 256 * 256. However, the retina sheet is much smaller than 256 * 256, which means there are far fewer stimulus points than 256 * 256, so it seems there are not enough stimulus points to multiply by the weights. Or have I misunderstood something? Could you help explain it?

@jbednar
Member

jbednar commented May 10, 2017

@jlstevens , can you please check your published simulation to report the sizes of the LGN, Retina, and the LGN receptive fields on the retina? In no case will the LGN RFs on the retina be larger than the retina itself.

@jlstevens
Member

I'll have a look.

My suspicion is that the 256x256 resolution is what you see when you look at the ImaGen pattern (i.e., for viewing), as I am fairly sure you'll see very different numbers once the pattern is turned into weights for a connection field.

I don't believe we have any such nice (i.e., power-of-2) numbers for any of the CFs.

@dancehours
Author

dancehours commented May 11, 2017

Hi, I am not quite sure what you mean. The script I am running is /topographica/models/stevens.jn13/gcal.ty. The commands are:
centerg = pattern.Gaussian(size=0.07385,aspect_ratio=1.0,
output_fns=[transferfn.DivisiveNormalizeL1()])

surroundg = pattern.Gaussian(size=0.29540,aspect_ratio=1.0,
output_fns=[transferfn.DivisiveNormalizeL1()])

on_weights = pattern.Composite(generators=[centerg,surroundg],operator=numpy.subtract)

off_weights = pattern.Composite(generators=[surroundg,centerg],operator=numpy.subtract)

I ran the script and opened the data of these weights, and the size of the weights is 256 * 256.
Do you mean that for the connection fields, only a part of these weights is used, approximately equal to the entire weights? How are the normalization factors applied to the actual weights? Are they the same as the ones for the entire weights?
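[Editor's note: the normalization question can be sketched in plain NumPy. This is an assumption about how `DivisiveNormalizeL1` behaves when a pattern is rendered, not a verified trace of the Topographica code: L1 normalization rescales whatever array the pattern is rendered into, so the per-pixel values of a CF-sized array differ from those of a 256x256 rendering even though both sum to 1.]

```python
import numpy as np

# Sketch (an assumption, not verified against Topographica): L1
# normalization divides by the sum over the rendered array, so the
# normalization factor depends on the rendering resolution.
def l1_normalize(w):
    return w / np.abs(w).sum()

coarse = l1_normalize(np.ones((18, 18)))   # CF-sized rendering
fine = l1_normalize(np.ones((256, 256)))   # default 256x256 rendering
print(coarse[0, 0], fine[0, 0])  # same L1 sum, different per-pixel values
```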

@jbednar
Member

jbednar commented May 11, 2017

Ah, I see why you are confused! The weights and input patterns are created by ImaGen, a library that lets you specify resolution-independent patterns, as in a vector drawing program such as Illustrator or Inkscape. The on_weights here are not 256 * 256 or any other array shape; they don't have any specific resolution at this stage. If you call them using on_weights(), they will be rendered into a default-sized array of 256 * 256, but you could just as easily call them with on_weights(xdensity=5,ydensity=1000) and get a different array. The array size hasn't been chosen yet at this point.
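[Editor's note: the resolution-independence idea can be sketched in plain NumPy. This is not ImaGen's actual API; the `render` helper and its parameters are illustrative assumptions. The point is that the pattern is an abstract function over sheet coordinates, and an array only exists once a density is chosen.]

```python
import numpy as np

# Sketch of resolution-independent patterns (not ImaGen's real API):
# a pattern is a function over sheet coordinates, rendered into an
# array only when x/y densities are supplied.
def render(pattern_fn, bounds=1.0, xdensity=256, ydensity=256):
    xs = np.linspace(-bounds / 2, bounds / 2, int(bounds * xdensity))
    ys = np.linspace(-bounds / 2, bounds / 2, int(bounds * ydensity))
    xx, yy = np.meshgrid(xs, ys)
    return pattern_fn(xx, yy)

# A difference-of-Gaussians "pattern" as an abstract function:
dog = lambda x, y: (np.exp(-(x**2 + y**2) / 0.0014)
                    - np.exp(-(x**2 + y**2) / 0.0218))

print(render(dog).shape)                              # (256, 256) default
print(render(dog, xdensity=5, ydensity=1000).shape)   # (1000, 5)
```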

You need to look at the actual network in memory if you want to see the specific array of values used during simulation, using something like topo.sim.LGNOn.cfs[0,0] (not sure of the exact syntax, and I can't check easily, but it should be something like that, which you can find by tab completion). Or you can inspect the size of the weights visually in the GUI. But here you're just getting the abstract mathematical specification of the pattern, not the specific array being used.

@dancehours
Author

Now I see it, thank you so much !!
