AttributeError: 'DGLGraph' object has no attribute '_use_graphbolt' #7868

Open

dangnha opened this issue Feb 20, 2025 · 0 comments

🐛 Bug

I get the following error:

Creating minibatch pretraining dataloader...
Traceback (most recent call last):
  File "/home/s12gb-2/miniconda3/envs/graph_newset/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/home/s12gb-2/miniconda3/envs/graph_newset/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/home/s12gb-2/DangNha/reset_TXGNN/TxGNN/reproduce/train.py", line 50, in <module>
    TxGNN.pretrain(n_epoch = 1, 
  File "/home/s12gb-2/DangNha/reset_TXGNN/TxGNN/txgnn/TxGNN.py", line 155, in pretrain
    dataloader = dist.DistEdgeDataLoader(
  File "/home/s12gb-2/miniconda3/envs/graph_newset/lib/python3.10/site-packages/dgl/distributed/dist_dataloader.py", line 863, in __init__
    self.collator = EdgeCollator(g, eids, graph_sampler, **collator_kwargs)
  File "/home/s12gb-2/miniconda3/envs/graph_newset/lib/python3.10/site-packages/dgl/distributed/dist_dataloader.py", line 634, in __init__
    Collator.add_edge_attribute_to_graph(self.g, self.graph_sampler.prob)
  File "/home/s12gb-2/miniconda3/envs/graph_newset/lib/python3.10/site-packages/dgl/distributed/dist_dataloader.py", line 325, in add_edge_attribute_to_graph
    if g._use_graphbolt and data_name:
AttributeError: 'DGLGraph' object has no attribute '_use_graphbolt'

This happens when I try to create a DistEdgeDataLoader for training. My code:

import dgl
import dgl.distributed as dist

dataloader = dist.DistEdgeDataLoader(
    self.G,                           # Graph (should be a distributed DGLGraph)
    train_eid_dict,                   # Edge IDs (dict for heterogeneous graphs)
    sampler,                          # Graph sampler (e.g., NeighborSampler)
    negative_sampler=Minibatch_NegSampler(self.G, 1, 'fix_dst'),  # For link prediction
    batch_size=batch_size,            # Batch size
    shuffle=True,                     # Shuffle edges during sampling
    drop_last=False,                  # Keep incomplete batches
    # exclude='reverse_types',        # Optional: for reverse edge exclusion
    # reverse_etypes=reverse_etypes,  # Optional: reverse edge types
    num_workers=0                     # Avoid multiprocessing issues (for debugging)
)
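
For reference, here is a minimal sketch of what I understand the intended setup to be, based on the DGL distributed training docs: the loader is given a dist.DistGraph (which does have _use_graphbolt), not a plain DGLGraph. The graph name, partition config path, ip_config.txt, fanouts, and batch size below are placeholders, and the edge-ID dict and sampler are simplified stand-ins for the TxGNN ones above.

import torch
import dgl.dataloading as dataloading
import dgl.distributed as dist

# Placeholders: produced by an earlier dgl.distributed.partition_graph() run.
dist.initialize("ip_config.txt")
g = dist.DistGraph("txgnn_graph", part_config="partitions/txgnn_graph.json")

# Simplified stand-in for train_eid_dict: all edges of every type.
train_eids = {etype: torch.arange(g.num_edges(etype)) for etype in g.etypes}
sampler = dataloading.MultiLayerNeighborSampler([10, 10])

dataloader = dist.DistEdgeDataLoader(
    g,                                # DistGraph instead of a plain DGLGraph
    train_eids,
    sampler,
    negative_sampler=dataloading.negative_sampler.Uniform(1),
    batch_size=1024,
    shuffle=True,
    drop_last=False,
    num_workers=0,
)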

My Environment

  • DGL Version (e.g., 1.0): dgl==2.4.0+cu121
  • Backend Library & Version: Pytorch==2.4.0
  • OS (e.g., Linux): Ubuntu
  • How you installed DGL (conda, pip, source): pip install dgl==2.4.1 -f https://data.dgl.ai/wheels/torch-2.4/cu121/repo.html
  • Python version: 3.10.16
  • CUDA/cuDNN version (if applicable): 12.1

Question

This bug comes from the DGL version, right? How can I solve it, or do I have to wait for a new DGL release?
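
In case it helps anyone hitting the same thing: the failing line only reads the flag (if g._use_graphbolt and data_name:), so the one stopgap I can think of (untested, and not a real fix) is to set that attribute on the graph before building the loader:

# Untested stopgap: the collator only reads this flag, so setting it to False
# lets the check fall through. This assumes nothing downstream needs GraphBolt;
# the proper path is presumably to pass a dist.DistGraph instead.
if not hasattr(self.G, "_use_graphbolt"):
    self.G._use_graphbolt = False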
