This project refines the capabilities of Llama 2, a large language model (LLM), specifically the 13B parameter version, with the aim of improving its ability to generate typosquat domain names. It uses a self-supervised fine-tuning approach with TRL (Transformer Reinforcement Learning) on a dataset of typosquat domains.


Completion Date: Feb 2024 | Tools: PyTorch, Hugging Face, Llama 2 13B LLM, Accelerate, TRL, Link

Goal:

  • Improve the Llama 2 13B Chat LLM’s ability to generate creative and grammatically correct typosquat domains. Typosquat domains are intentionally misspelled versions of legitimate domains (e.g., gooogle.com for google.com) registered for malicious purposes such as phishing.

Methodology:

  • Self-Supervised Fine-tuning: Unlike supervised learning, which requires explicitly labeled data, this project uses self-supervised fine-tuning: the LLM learns directly from the raw typosquat domain dataset, picking up the spelling and structural patterns that characterize typosquats (see the sketch after this list).
  • Transformer Reinforcement Learning: The project leverages TRL, a reinforcement-learning approach that rewards the LLM for generating typosquats that closely resemble real domains while containing deliberate mistakes. This reward signal reinforces the desired behavior, steering the model toward producing exactly this type of domain (a hypothetical reward function is sketched further below).
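
Although the project’s actual training script is not shown on this page, a minimal sketch of the self-supervised fine-tuning step using TRL’s SFTTrainer might look like the following. The data file name and hyperparameters are illustrative assumptions, and the TRL API differs slightly between versions:

```python
# Hypothetical sketch: fine-tuning Llama 2 13B Chat on a typosquat corpus with TRL's SFTTrainer.
# The file "typosquat_domains.txt" and all hyperparameters are illustrative assumptions.
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import SFTTrainer

model_id = "meta-llama/Llama-2-13b-chat-hf"    # requires accepting Meta's license on Hugging Face

tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token      # Llama 2 defines no pad token by default

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",                         # spread the 13B weights across the available GPUs
)

# One typosquat example per line, e.g. "google.com -> gooogle.com"
dataset = load_dataset("text", data_files="typosquat_domains.txt", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",                 # column holding the raw training strings
    max_seq_length=64,                         # domain names are short
    args=TrainingArguments(
        output_dir="llama2-typosquat-sft",
        per_device_train_batch_size=4,
        gradient_accumulation_steps=4,
        num_train_epochs=3,
        learning_rate=2e-5,
        bf16=True,
        logging_steps=50,
    ),
)
trainer.train()
```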

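The reward signal described above can be made concrete with a simple edit-distance heuristic. The function below is a hypothetical illustration (the project’s actual reward design is not documented here); in TRL, a scalar score like this could be fed to a PPO-style trainer to reinforce the desired generations:

```python
# Hypothetical reward shaping for the reinforcement-learning stage: reward generations
# that are close to, but not identical to, a legitimate domain.

def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                  # deletion
                            curr[j - 1] + 1,              # insertion
                            prev[j - 1] + (ca != cb)))    # substitution
        prev = curr
    return prev[-1]

def typosquat_reward(legit_domain: str, generated: str) -> float:
    """Reward generations that differ from the real domain by a small number of edits."""
    distance = levenshtein(legit_domain.lower(), generated.lower())
    if distance == 0:
        return -1.0   # exact copy of the real domain is not a typosquat
    if distance <= 2:
        return 1.0    # plausible typo: one or two character edits
    return -0.5       # too different to pass as the target domain

# Example: typosquat_reward("google.com", "gooogle.com") -> 1.0
```
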
Technical Stack:

  • Hardware: The project runs on an AWS g5.12xlarge instance (4× NVIDIA A10G GPUs) to handle the compute-intensive training required to fine-tune a 13B-parameter language model.
  • Software:
    • Hugging Face: The Transformers library and model hub, used to download and load the Llama 2 model and tokenizer.
    • Accelerate: A Hugging Face library for distributed and mixed-precision training, used to drive multi-GPU training on the AWS instance (see the sketch after this list).
    • Torch: PyTorch, the deep learning framework underlying Transformers, Accelerate, and TRL; it provides the tensors, autograd, and optimizers used during fine-tuning.
    • TRL: Hugging Face’s Transformer Reinforcement Learning library, which provides trainers (such as SFTTrainer and PPOTrainer) for supervised fine-tuning and reinforcement-learning-based fine-tuning of transformer language models.
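
For context on how Accelerate fits in: TRL’s trainers use it under the hood, but a simplified manual training loop distributed across the instance’s four GPUs might look like the sketch below (the loop and launch command are illustrative, not the project’s actual code):

```python
# Hypothetical illustration of how Hugging Face Accelerate distributes a training loop
# across the four A10G GPUs of a g5.12xlarge instance.
from accelerate import Accelerator

def train(model, optimizer, dataloader, num_epochs: int = 1):
    accelerator = Accelerator()                      # detects the available GPUs / processes
    model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)

    model.train()
    for _ in range(num_epochs):
        for batch in dataloader:
            outputs = model(**batch)                 # causal-LM batches already contain labels
            loss = outputs.loss
            accelerator.backward(loss)               # handles gradient sync across GPUs
            optimizer.step()
            optimizer.zero_grad()

# Launched with: accelerate launch --num_processes 4 train.py
```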
