Hi,
The TIME_STEPS of my LSTM model is 10. The code for the explanation section is:
import numpy as np
import matplotlib.pyplot as plt
from captum.attr import IntegratedGradients

input.requires_grad_()
ig = IntegratedGradients(model)
attr, delta = ig.attribute(input, target=0, return_convergence_delta=True)
attr = attr.detach().numpy()

def visualize_importances(feature_names, importances, title="Average Feature Importances", plot=True, axis_title="Features"):
    print(title)
    for i in range(len(feature_names)):
        print(feature_names[i], ": ", '%.3f' % (importances[i]))
    x_pos = np.arange(len(feature_names))
    if plot:
        plt.figure(figsize=(12, 6))
        plt.bar(x_pos, importances, align='center')
        plt.xticks(x_pos, feature_names, wrap=True)
        plt.xlabel(axis_title)
        plt.title(title)

visualize_importances(feature_names, np.mean(attr, axis=0))
For one problem, print(np.mean(attr, axis=0).shape) gives (10, 7), where 10 is TIME_STEPS and 7 is the number of features. print(np.mean(attr, axis=0)) outputs:
[[-6.14843489e-03 1.23128297e-02 0.00000000e+00 0.00000000e+00 -2.44566928e-03 9.71663677e-03 0.00000000e+00]
[1.01105891e-02 -1.33561019e-02 0.00000000e+00 0.00000000e+00 -1.08681414e-02 -8.36898667e-03 0.00000000e+00]
[-3.01603199e-03 2.83478058e-05 0.00000000e+00 0.00000000e+00 5.63220110e-03 3.27828258e-03 0.00000000e+00]
[6.62596261e-04 -4.36685487e-03 0.00000000e+00 0.00000000e+00 -5.13588440e-04 2.89923891e-06 0.00000000e+00]
[-1.46707092e-03 1.61141641e-03 0.00000000e+00 0.00000000e+00 -4.48725778e-03 4.45105709e-03 0.00000000e+00]
[-4.85951945e-04 2.61684348e-03 0.00000000e+00 0.00000000e+00 -8.63782548e-03 6.60564365e-03 0.00000000e+00]
[1.44520150e-03 -4.12249724e-03 0.00000000e+00 0.00000000e+00 -3.75068304e-03 6.29765388e-03 0.00000000e+00]
[2.51529449e-03 -1.04613991e-02 0.00000000e+00 0.00000000e+00 -5.62792612e-03 8.99994289e-03 0.00000000e+00]
[-2.50925849e-02 3.11531167e-02 0.00000000e+00 0.00000000e+00 1.79945677e-02 5.28183730e-02 0.00000000e+00]
[5.15326855e-02 -5.28545470e-02 0.00000000e+00 0.00000000e+00 -7.92302065e-02 -1.16893978e-02 0.00000000e+00]]
The result I expected is one importance value per feature, but here there are 10 values per feature (one per time step). How do I get a single value per feature?
Hi @qishubo, the attr shape for each target is meant to be [batch_size, time_steps, num_features], because the importance can change across time. However, if you want only one importance value per feature, you can aggregate along the time steps.
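A minimal sketch of that aggregation (not from the original thread; it reuses the attr, feature_names, and visualize_importances names from the snippet above, and the choice of a mean over time is an assumption; a sum or a mean of absolute values would work just as well):

import numpy as np

# attr has shape [batch_size, TIME_STEPS, num_features];
# averaging over both the batch and the time dimensions leaves one value per feature.
attr_per_feature = attr.mean(axis=(0, 1))   # shape: (num_features,)

# Equivalently, starting from the (10, 7) array that was already averaged over the batch:
# attr_per_feature = np.mean(attr, axis=0).mean(axis=0)

visualize_importances(feature_names, attr_per_feature)

If the sign of the attribution does not matter to you, aggregating np.abs(attr) instead avoids positive and negative contributions at different time steps cancelling out.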