CoreRec

CoreRec excels in node recommendations, model training, and graph visualizations, making it a versatile toolkit for data scientists and researchers.

CoreRecommendation Engine

CoreRec offers a robust recommendation system based on graph analysis. It can recommend similar nodes within a graph, aiding in various applications such as personalized recommendations in social networks or product recommendations in e-commerce platforms.
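
As a minimal illustration of the idea (not CoreRec's own API), the sketch below scores nodes by the cosine similarity of their adjacency rows and returns the closest neighbours; CoreRec's engine builds on the same similarity principle with a richer interface.

# Minimal sketch of similarity-based node recommendation (illustrative only;
# CoreRec ships its own recommendation engine with a different interface).
import numpy as np

def recommend_similar_nodes(adjacency, node, top_k=3):
    # Cosine similarity between the target node's connectivity row
    # and every other node's row in the adjacency matrix.
    norms = np.linalg.norm(adjacency, axis=1) + 1e-9
    sims = adjacency @ adjacency[node] / (norms * norms[node])
    sims[node] = -np.inf  # exclude the node itself
    return np.argsort(sims)[::-1][:top_k]

adjacency = np.array([
    [0, 1, 1, 0],
    [1, 0, 1, 1],
    [1, 1, 0, 0],
    [0, 1, 0, 0],
], dtype=float)
print(recommend_similar_nodes(adjacency, node=0))  # e.g. [3 2 1]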

Advanced Graph Analysis

CoreRec provides cutting-edge tools for analyzing complex graph structures, making it ideal for data scientists and researchers.

Node Recommendation Engine

Utilize CoreRec's powerful engine to recommend similar nodes within a graph, enhancing user experience and engagement.

Customizable Transformer Model

Define and train Transformer models tailored to your graph data with customizable parameters for optimal performance.
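
To show what "customizable parameters" can look like in practice, here is an illustrative plain-PyTorch sketch of a configurable Transformer encoder over item feature vectors. The class and its hyperparameters are assumptions for illustration, not the signature of CoreRec's NN__TransformerModel (used in the training example further below).

# Illustrative PyTorch sketch of a configurable Transformer encoder; the
# class and hyperparameters here are assumptions, not CoreRec's API.
import torch
import torch.nn as nn

class SimpleGraphTransformer(nn.Module):
    def __init__(self, input_dim, d_model=64, nhead=4, num_layers=2, dropout=0.1):
        super().__init__()
        self.embed = nn.Linear(input_dim, d_model)
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=nhead, dropout=dropout, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.head = nn.Linear(d_model, input_dim)

    def forward(self, x):
        # x: (batch, seq_len, input_dim) feature vectors for nodes/items
        return self.head(self.encoder(self.embed(x)))

model = SimpleGraphTransformer(input_dim=18)
out = model(torch.randn(8, 5, 18))  # batch of 8 sequences, 5 items each
print(out.shape)  # torch.Size([8, 5, 18])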

PyTorch Dataset Integration

Seamlessly integrate graph data with PyTorch datasets, streamlining the model training process.
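
The sketch below shows the pattern: node or item features wrapped in a standard torch.utils.data.Dataset and batched through the DataLoader used in the examples further down. The NodeFeatureDataset class is a hypothetical example, not part of CoreRec, and the CoreRec DataLoader is assumed to iterate like PyTorch's.

# Sketch of exposing graph node features as a PyTorch Dataset. The Dataset is
# plain PyTorch; the DataLoader import mirrors the examples below and is
# assumed to behave like torch.utils.data.DataLoader.
import torch
from torch.utils.data import Dataset
from corerec.cr_utility.dataloader import DataLoader as CRDataLoader

class NodeFeatureDataset(Dataset):
    def __init__(self, feature_matrix):
        # feature_matrix: (num_nodes, num_features) node attributes
        self.features = torch.as_tensor(feature_matrix, dtype=torch.float32)

    def __len__(self):
        return self.features.shape[0]

    def __getitem__(self, idx):
        return self.features[idx]

dataset = NodeFeatureDataset(torch.rand(100, 16))
loader = CRDataLoader(dataset, batch_size=32, shuffle=True)
for batch in loader:
    print(batch.shape)  # e.g. torch.Size([32, 16])
    break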

Flexible Model Training

Train your models with ease using CoreRec's flexible training functions, supporting various configurations.

Accurate Recommendation Metrics

Measure the accuracy of your recommendations with robust metrics provided by CoreRec.
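
The exact metric utilities CoreRec exposes are not shown here; as a plain-Python sketch, the functions below compute two common top-N accuracy measures, precision@k and recall@k, on a ranked recommendation list.

# Plain-Python sketch of common top-N accuracy metrics; CoreRec's own metric
# utilities may differ in naming and signature.
def precision_at_k(recommended, relevant, k=10):
    top_k = recommended[:k]
    hits = len(set(top_k) & set(relevant))
    return hits / k

def recall_at_k(recommended, relevant, k=10):
    top_k = recommended[:k]
    hits = len(set(top_k) & set(relevant))
    return hits / max(len(relevant), 1)

recommended = [12, 7, 99, 4, 31]   # ranked item ids from a recommender
relevant = {7, 4, 56}              # items the user actually interacted with
print(precision_at_k(recommended, relevant, k=5))  # 0.4
print(recall_at_k(recommended, relevant, k=5))     # 0.666...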

2D Graph Visualizations

Create stunning 2D visualizations of your graphs, making data analysis more intuitive and insightful.
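
As an illustration of the kind of output to expect, the snippet below draws a 2D force-directed layout with networkx and matplotlib; CoreRec and VishGraphs ship their own drawing utilities, which are not shown here.

# Illustrative 2D graph drawing with networkx/matplotlib; VishGraphs/CoreRec
# expose their own visualization helpers.
import networkx as nx
import matplotlib.pyplot as plt

G = nx.karate_club_graph()
pos = nx.spring_layout(G, seed=42)       # force-directed 2D layout
nx.draw(G, pos, node_size=80, node_color="steelblue", edge_color="gray")
plt.title("2D force-directed layout")
plt.show()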

3D Graph Visualizations

Experience your graphs in 3D with customizable features, providing a deeper understanding of complex networks.
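
Similarly, a basic 3D view can be sketched with a matplotlib 3D scatter over spring-layout positions, as below; CoreRec's built-in 3D viewer adds its own interactivity and customization on top of this idea.

# Illustrative 3D rendering of a graph with matplotlib; CoreRec's built-in
# 3D viewer is separate and offers its own customization options.
import networkx as nx
import matplotlib.pyplot as plt

G = nx.karate_club_graph()
pos = nx.spring_layout(G, dim=3, seed=42)  # 3D force-directed positions

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
xs, ys, zs = zip(*(pos[n] for n in G.nodes))
ax.scatter(xs, ys, zs, s=40, c="steelblue")
for u, v in G.edges:
    ax.plot(*zip(pos[u], pos[v]), color="gray", linewidth=0.5)
plt.show()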

Get Started

Basic Usage

Learn how to load data and build user interactions with CoreRec.


import corerec as cr

def load_data():
    users = cr.load_users('data/users.dat')
    ratings = cr.load_ratings('data/ratings.dat')
    movies = cr.load_movies('data/movies.dat')
    return users, ratings, movies

def build_interactions(ratings):
    return cr.build_user_interactions(ratings)

if __name__ == "__main__":
    users, ratings, movies = load_data()
    user_interactions = build_interactions(ratings)
    print(user_interactions)

Context-Aware Recommendations

Implement context-aware recommendations using CoreRec's recommendation engines.


import corerec as cr
import os

def main():
    data_path = 'data/'  # Update with your actual data path
    context_config_path = os.path.join(data_path, 'context_config.json')
    
    users, ratings, movies = cr.load_data(data_path)
    user_interactions = cr.build_user_interactions(ratings)
    item_features = cr.build_item_features(movies)
    
    context_recommender = cr.CON_CONTEXT_AWARE(
        context_config_path=context_config_path,
        item_features=item_features
    )
    context_recommender.fit(user_interactions)
    
    recommendations = context_recommender.recommend(
        user_id=1, 
        context={'time_of_day': 'evening', 'location': 'home'}, 
        top_n=10
    )
    print(recommendations)

if __name__ == "__main__":
    main()

CNN-Based Recommender

Implement a CNN-based recommendation system with CoreRec.


import torch
from torch.utils.data import Dataset
from corerec.cr_utility.dataloader import DataLoader as CRDataLoader
from corerec.engines.contentFilterEngine.nn_based_algorithms import NN__CNN

class MoviesDataset(Dataset):
    def __init__(self, file_path):
        # Parse the movies file into (genre_vector, title) pairs; see the
        # full MoviesDataset implementation in the Transformer/RNN example.
        self.movies = []

    def __len__(self):
        return len(self.movies)

    def __getitem__(self, idx):
        # Return a multi-hot genre vector and the movie title
        return self.movies[idx]

def train_model(model, data_loader, criterion, optimizer, num_epochs, device):
    model.train()
    for epoch in range(num_epochs):
        total_loss = 0
        for genre_vector, titles in data_loader:
            genre_vector = genre_vector.to(device)
            optimizer.zero_grad()
            outputs = model(genre_vector)
            loss = criterion(outputs, genre_vector)
            loss.backward()
            optimizer.step()
            total_loss += loss.item()
        avg_loss = total_loss / len(data_loader)
        print(f"Epoch {epoch+1}/{num_epochs}, Loss: {avg_loss:.4f}")

if __name__ == "__main__":
    dataset = MoviesDataset(file_path='data/movies.dat')
    dataloader = CRDataLoader(dataset, batch_size=32, shuffle=True)

    model = NN__CNN(input_dim=20, num_classes=5)  # Example dimensions
    criterion = torch.nn.BCELoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=0.001)

    train_model(model, dataloader, criterion, optimizer, num_epochs=10, device='cpu')

Transformer and RNN Recommendation Models

Train and evaluate Transformer and RNN models for recommendations using CoreRec.


import torch
from torch.utils.data import Dataset
from corerec.cr_utility.dataloader import DataLoader 
from corerec.engines.contentFilterEngine.nn_based_algorithms import (
    NN__TransformerModel, NN__RNNModel
)

class MoviesDataset(Dataset):
    def __init__(self, file_path):
        self.movies = []
        self.genre_to_idx = {}
        self.idx_to_genre = []
        self._load_data(file_path)

    def _load_data(self, file_path):
        with open(file_path, 'r', encoding='latin1') as f:
            for line in f:
                parts = line.strip().split('::')
                if len(parts) < 3:
                    continue
                movie_id, title, genres = parts
                genre_list = genres.split('|')
                self.movies.append((title, genre_list))
                for genre in genre_list:
                    if genre not in self.genre_to_idx:
                        self.genre_to_idx[genre] = len(self.idx_to_genre)
                        self.idx_to_genre.append(genre)

    def __len__(self):
        return len(self.movies)

    def __getitem__(self, idx):
        genre_vector = torch.zeros(len(self.genre_to_idx))
        for genre in self.movies[idx][1]:
            genre_vector[self.genre_to_idx[genre]] = 1
        return genre_vector, self.movies[idx][0]

def train_model(model, data_loader, criterion, optimizer, num_epochs, device):
    model.train()
    for epoch in range(num_epochs):
        total_loss = 0
        for inputs, labels in data_loader:
            inputs = inputs.to(device)
            optimizer.zero_grad()
            outputs = model(inputs)
            loss = criterion(outputs, inputs)
            loss.backward()
            optimizer.step()
            total_loss += loss.item()
        avg_loss = total_loss / len(data_loader)
        print(f"Epoch {epoch+1}/{num_epochs}, Loss: {avg_loss:.4f}")

if __name__ == "__main__":
    dataset = MoviesDataset(file_path='data/movies.dat')
    dataloader = DataLoader(dataset, batch_size=32, shuffle=True)

    # Remaining constructor hyperparameters are elided in this snippet;
    # fill them in from the NN__TransformerModel / NN__RNNModel signatures.
    transformer = NN__TransformerModel(input_dim=500, ...)
    rnn = NN__RNNModel(input_dim=500, ...)

    criterion = torch.nn.BCELoss()
    optimizer_t = torch.optim.Adam(transformer.parameters(), lr=0.001)
    optimizer_r = torch.optim.Adam(rnn.parameters(), lr=0.001)

    # Train Transformer
    train_model(transformer, dataloader, criterion, optimizer_t, num_epochs=5, device='cpu')

    # Train RNN
    train_model(rnn, dataloader, criterion, optimizer_r, num_epochs=5, device='cpu')


License

CoreRec and VishGraphs are fully open-source projects, and we strongly encourage open-source contributions. You are free to use, modify, and distribute the code as long as you adhere to the terms of the open-source license.

[Tip for developers]: If you find this project useful, consider contributing back to the community by submitting bug fixes, feature enhancements, or documentation improvements.

If you want to support the development of these open source projects, you can star the repository on GitHub. Thank you for your support!

Contact

Discover the power of graph analysis and recommendation with CoreRec & VishGraphs. Dive into our comprehensive manual and explore the endless possibilities.
Feel free to get in touch if you have any questions or suggestions.

Wanna Contribute?

We welcome contributions to enhance the functionality of our graph analysis and recommendation tools. Here are a few ways you can help:

  • Bug Fixes: Identify and fix bugs in the existing code.
  • Feature Enhancements: Suggest and implement improvements to current features.
  • New Features: Propose and develop new features that could benefit users of the libraries.
  • Documentation: Help improve the documentation to make the libraries more user-friendly.

To contribute, please follow these steps:

  1. Fork the repository.
  2. Create a new branch for your feature or fix.
  3. Develop your changes while adhering to the coding standards and guidelines.
  4. Submit a pull request with a clear description of the changes and any relevant issue numbers.

Your contributions are greatly appreciated and will help make these tools more effective and accessible to everyone!

Vishesh Yadav
Project Maintainer

Get Connected