Usage

The public API.

page_rank(*, adj=None, edge_index=None, num_nodes=None, add_identity=False, max_iter=1000, alpha=0.05, epsilon=0.0001, x0=None, use_tqdm=False, device=None)[source]

Compute page rank by power iteration.

Return type

Tensor

Returns

shape: (n,) or (batch_size, n). The page-rank vector, i.e., a score between 0 and 1 for each node.
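
As a minimal sketch of the basic call (the toy edge list below is made up purely for illustration), the graph can be passed as an edge_index tensor of shape (2, num_edges), and the function returns one score per node:

>>> import torch
>>> from torch_ppr import page_rank
>>> edge_index = torch.as_tensor(data=[(0, 1), (1, 2), (1, 3), (2, 4)]).t()
>>> scores = page_rank(edge_index=edge_index)
>>> scores.shape
torch.Size([5])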

personalized_page_rank(*, adj=None, edge_index=None, add_identity=False, num_nodes=None, indices=None, device=None, batch_size=None, **kwargs)[source]

Personalized Page-Rank (PPR) computation.

Note

This method supports automatic memory optimization / batch size selection using torch_max_mem.

Return type

Tensor

Returns

shape: (k, n). The PPR vectors, one row per requested node index.
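
A short sketch (re-using the same made-up toy edge list as above): PPR vectors for a subset of nodes can be computed directly from an edge index, and batch_size can usually be left at None so that torch_max_mem selects it automatically:

>>> import torch
>>> from torch_ppr import personalized_page_rank
>>> edge_index = torch.as_tensor(data=[(0, 1), (1, 2), (1, 3), (2, 4)]).t()
>>> ppr = personalized_page_rank(edge_index=edge_index, indices=torch.as_tensor([0, 2], dtype=torch.long))
>>> ppr.shape
torch.Size([2, 5])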

The following example shows how to pass a custom adjacency matrix. For illustrative purposes, we randomly generate one:

>>> import torch
>>> adj = (torch.rand(300, 300)*10).round().to_sparse()

Next, we need to ensure that the matrix is column-normalized, i.e., that the individual columns sum to 1. Here, we re-use a utility method provided by the library:

>>> from torch_ppr.utils import sparse_normalize
>>> adj_normalized = sparse_normalize(adj, dim=0)
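
As an optional sanity check (a sketch, not required by the API), the column sums of the normalized sparse matrix should now be approximately one:

>>> col_sum = torch.sparse.sum(adj_normalized, dim=0).to_dense()
>>> torch.allclose(col_sum, torch.ones_like(col_sum))
True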

Finally, we can use this matrix to calculate the personalized page rank for some nodes:

>>> from torch_ppr import personalized_page_rank
>>> indices = torch.as_tensor([1, 2], dtype=torch.long)
>>> ppr = personalized_page_rank(adj=adj_normalized, indices=indices)
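
The result contains one PPR vector per requested index. As a quick, purely illustrative inspection, each row is a distribution over the 300 nodes and hence sums to approximately one:

>>> ppr.shape
torch.Size([2, 300])
>>> torch.allclose(ppr.sum(dim=-1), torch.ones(2))
True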