RoostGPT - Guide
Installation
Headless:
Prerequisite for meta-llama2 model:
GPU instance: NVIDIA A100 80GB with Ubuntu 22.04
Install the NVIDIA drivers and Docker:
#!/bin/bash
sudo apt-get install linux-headers-$(uname -r)
distribution=$(. /etc/os-release;echo $ID$VERSION_ID | sed -e 's/\.//g')
wget https://developer.download.nvidia.com/compute/cuda/repos/$distribution/x86_64/cuda-keyring_1.0-1_all.deb
sudo dpkg -i cuda-keyring_1.0-1_all.deb
sudo apt-get update
sudo apt-get -y install cuda-drivers
sudo apt-get update
sudo apt-get install -y nvidia-container-toolkit-base
nvidia-ctk --version
sudo nvidia-ctk cdi generate --output=/etc/cdi/nvidia.yaml
curl https://get.docker.com | sh \
  && sudo systemctl --now enable docker
distribution=$(. /etc/os-release;echo $ID$VERSION_ID) \
  && curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg \
  && curl -s -L https://nvidia.github.io/libnvidia-container/$distribution/libnvidia-container.list | \
    sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
    sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
sudo apt-get update
sudo apt-get install -y nvidia-container-toolkit
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker
sudo docker run --rm --runtime=nvidia --gpus all nvidia/cuda:11.6.2-base-ubuntu20.04 nvidia-smi
Run the following docker command on the above instance (this instance's IP address will be your LLAMA2-API-SERVER-IPADDRESS):
docker run --gpus all --network host --rm -d -p 5000:5000 zbio/roostai-llama2-13b:1.1.3
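Once the container is up, a quick way to confirm the API server is reachable is to request the endpoint and check the HTTP status. This is a generic reachability check, not part of the official setup; it assumes the server answers HTTP on port 5000, and `<LLAMA2-API-SERVER-IPADDRESS>` is a placeholder for your instance's address:

```shell
# Replace the placeholder with your instance's address. This only checks
# that the port answers HTTP; it does not exercise the model itself.
curl -s -o /dev/null -w "%{http_code}\n" \
  http://<LLAMA2-API-SERVER-IPADDRESS>:5000/generate
```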
With Roost UI:
Postgres/MySQL RDS
EKS with EFS (CSI driver)
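One way to confirm the EFS CSI driver prerequisite on the cluster (a generic Kubernetes check, not Roost-specific) is to list the registered CSI drivers:

```shell
# Lists CSI drivers registered with the cluster; the EFS driver
# appears as efs.csi.aws.com when installed.
kubectl get csidriver
```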
Add the Helm repo and install the Helm chart:
helm repo add roostai https://roost-io.github.io/helm/releases
helm upgrade roostai roostai/roost --create-namespace --namespace roost-ai --install --values values.yaml
For Helm uninstall
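A sketch of the uninstall, assuming the release name (roostai) and namespace (roost-ai) used in the install command above:

```shell
# Removes the Helm release and its resources from the roost-ai namespace.
helm uninstall roostai --namespace roost-ai
```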
More information about the Helm chart can be found here.
Usage
CLI Usage:
Individual developers run the command below to generate tests on their systems.
The environment file will look like this:
VSCode Extension Usage
RoostGPT also provides a VS Code extension for generating tests directly from your VS Code workspace. You can download the extension from here.
After installing the VS Code extension, you need to configure it to use the llama2 model. Go to the extension settings and configure it there.
Provide Roost Token.
Select the open-source option for the Generative AI Model.
Provide the open-source model endpoint in the following format:
http://<LLAMA2-API-SERVER-IPADDRESS>:5000/generate
Set the rest of the values according to the test type and the code repo. More information about which values are required for each test type can be found here.
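As a concrete illustration of the endpoint format (the IP address below is a documentation placeholder, not a real server):

```shell
# Hypothetical address for illustration only
LLAMA2_API_SERVER_IPADDRESS="203.0.113.10"
# Build the endpoint in the format the extension expects
ENDPOINT="http://${LLAMA2_API_SERVER_IPADDRESS}:5000/generate"
echo "$ENDPOINT"   # http://203.0.113.10:5000/generate
```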
Go to your workspace explorer and right-click a file or folder; you will see a Generate tests using RoostGPT option.
Right-click the API-Spec file if you want to generate API or integration tests.
Right-click any other file or folder to generate unit tests, or API tests from source code.
RoostGPT UI Usage
If RoostGPT UI is available, users can log in to the Roost URL.
Go to the RoostGPT tab to define and trigger test generation.
The UI requires Bitbucket credentials with permission to create PRs and read repos.
The UI requires the LLAMA2-API-SERVER-IPADDRESS.
Optionally, the UI accepts third-party credentials for integration with tools like:
Jira
ELKS