Ollama
Get up and running with large language models. Ollama is a client for running AI models locally; DeepSeek models can also be used through it.
Usage
This example uses several Ollama models together with a DeepSeek model. The script creates two prompts in Spanish, downloads each model, runs both prompts against it, and saves the answers to separate files.
Available Ollama versions:
- ollama 0.3.14
- ollama 0.5.7
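Both versions are provided as environment modules. Before submitting a batch job, the module can be checked interactively on a GPU node (a minimal sketch, assuming the OLLAMA_ROOT variable set by the module, as used in the scripts below):
module load ollama/0.5.7
$OLLAMA_ROOT/bin/ollama --version   # should report version 0.5.7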
Example script : test-ollama-hpc.sh
#!/bin/bash
#SBATCH -J ollama-gpu-test
#SBATCH -e ollama-test%j.err
#SBATCH -o ollama-test%j.msg
#SBATCH -p hopper # queue (partition)
#SBATCH --nodelist=agpuh02
#SBATCH --gres=gpu:4
# load ollama
#module load ollama/0.3.14
module load ollama/0.5.7
python --version
nvidia-smi
echo "Current path: $(pwd)"
# export variables
export BASE_OLLAMA_TEST=/fs/agustina/$(whoami)/test-ollama
export PROMPTS_PATH=$BASE_OLLAMA_TEST/prompts
export OLLAMA_BIN=$OLLAMA_ROOT/bin
echo "OLLAMA PATH: $OLLAMA_BIN"
export OLLAMA_NUM_PARALLEL=4   # number of requests the server handles in parallel
export OLLAMA_LOAD_TIMEOUT=900 # seconds to wait for a model to load
export OLLAMA_MODELS=$BASE_OLLAMA_TEST/models-ollama # where pulled models are stored
# create folders for models and prompts
mkdir -p $PROMPTS_PATH
mkdir -p $OLLAMA_MODELS
# prompts creation
echo "Dime a que instituto de UNIZAR corresponden las siglas BiFi. Describe su investigacion reciente." > $PROMPTS_PATH/prompt1.txt
echo "cual es el camino mas interesante de sevilla a barcelona pasando por madrid?" > $PROMPTS_PATH/prompt2.txt
# start the ollama server in the background and give it time to come up
$OLLAMA_BIN/ollama serve &
sleep 10
$OLLAMA_BIN/ollama list
# 1 - Download models (also DeepSeek)
# 2 - Pass the prompts to the models
# 3 - Get the answers and save them to a file
for i in llama3.1:8b-instruct-q2_K \
         llama3.1:8b-instruct-q8_0 \
         deepseek-r1:7b; do
    # start with an empty answer file for this model (ollama pulls the model on first run)
    : > answer1-$i.txt
    echo "" >> answer1-$i.txt
    echo "PROMPT:" >> answer1-$i.txt
    echo "" >> answer1-$i.txt
    cat $PROMPTS_PATH/prompt1.txt >> answer1-$i.txt
    echo "" >> answer1-$i.txt
    echo "ANSWER:" >> answer1-$i.txt
    echo "" >> answer1-$i.txt
    $OLLAMA_BIN/ollama run $i < $PROMPTS_PATH/prompt1.txt >> answer1-$i.txt
    echo "" >> answer1-$i.txt
    echo "--------------------------------" >> answer1-$i.txt
    echo "" >> answer1-$i.txt
    echo "PROMPT:" >> answer1-$i.txt
    echo "" >> answer1-$i.txt
    cat $PROMPTS_PATH/prompt2.txt >> answer1-$i.txt
    echo "" >> answer1-$i.txt
    echo "ANSWER:" >> answer1-$i.txt
    echo "" >> answer1-$i.txt
    $OLLAMA_BIN/ollama run $i < $PROMPTS_PATH/prompt2.txt >> answer1-$i.txt
done
$OLLAMA_BIN/ollama list
echo "DONE!"
Submit with :
sbatch --account=your_project_ID test-ollama-hpc.sh
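Once the job has finished, one answer file per model is left in the working directory (a sketch; the file names follow the loop in the script above):
squeue -u $(whoami)                # check whether the job is still queued or running
ls answer1-*.txt                   # one file per model
less answer1-deepseek-r1:7b.txt    # e.g. the answers from the DeepSeek model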
It is also possible to use other DeepSeek models, such as deepseek-coder-v2:16b, which generates programming code. As an example, we can request a Brainfuck programming language interpreter written in Python:
Example script : test-deepseek-hpc.sh
#!/bin/bash
#SBATCH -J deepseek-gpu-test
#SBATCH -e deepseek-test%j.err
#SBATCH -o deepseek-test%j.msg
#SBATCH -p hopper # queue (partition)
#SBATCH --nodelist=agpuh02
#SBATCH --gres=gpu:1
# load ollama
module load ollama/0.5.7
export BASE_OLLAMA_TEST=/fs/agustina/$(whoami)/deepseek-test
export ANSWERS_PATH=$BASE_OLLAMA_TEST/answers
export OLLAMA_BIN=$OLLAMA_ROOT/bin
echo "OLLAMA PATH: $OLLAMA_BIN"
export OLLAMA_NUM_PARALLEL=1   # a single request at a time is enough here
export OLLAMA_LOAD_TIMEOUT=900 # seconds to wait for the model to load
export OLLAMA_MODELS=$BASE_OLLAMA_TEST/models
mkdir -p $OLLAMA_MODELS
mkdir -p $ANSWERS_PATH
# start the ollama server in the background and give it time to come up
$OLLAMA_BIN/ollama serve &
sleep 10
PROMPT="create a brainfuck interpreter made in python"
: > $ANSWERS_PATH/answer.txt   # create/empty the answer file
echo "$PROMPT" | $OLLAMA_BIN/ollama run deepseek-coder-v2:16b >> $ANSWERS_PATH/answer.txt
Submit with :
sbatch --account=your_project_ID test-deepseek-hpc.sh
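When the job completes, the generated code can be read from the answer file (paths as defined in the script):
squeue -u $(whoami)     # wait until the job has finished
cat /fs/agustina/$(whoami)/deepseek-test/answers/answer.txt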
deepseek-coder-v2:16b response:
Creating a Brainfuck interpreter in Python is an interesting exercise. Below, I'll provide a simple implementation of a Brainfuck interpreter
that can execute Brainfuck commands. This interpreter will handle the basic set of commands defined by Brainfuck, including moving pointers,
modifying values at the pointer location, and looping based on the value at the current cell.
Here's a basic implementation:
def brain_fuck(code):
    tape = [0] * 30000   # Initialize a tape of 30,000 cells with all values set to 0
    pointer = 0          # The current cell of the tape being pointed by the interpreter
    code_pointer = 0     # Position in the Brainfuck code string
    brackets_stack = []  # Stack for keeping track of bracket positions

    while code_pointer < len(code):
        command = code[code_pointer]
        if command == '>':
            pointer += 1
            if pointer >= len(tape):
                raise IndexError("Tape pointer out of bounds.")
        elif command == '<':
            pointer -= 1
            if pointer < 0:
                raise IndexError("Tape pointer out of bounds.")
        elif command == '+':
            tape[pointer] += 1
            if tape[pointer] > 255:
                tape[pointer] = 0
        elif command == '-':
            tape[pointer] -= 1
            if tape[pointer] < 0:
                tape[pointer] = 255
        elif command == '.':
            print(chr(tape[pointer]), end='')
        elif command == ',':
            user_input = input()  # read once so the user is not prompted twice
            tape[pointer] = ord(user_input[0]) if user_input else 0
        elif command == '[':
            if tape[pointer] == 0:
                bracket_nesting = 1
                while bracket_nesting > 0:
                    code_pointer += 1
                    if code[code_pointer] == '[':
                        bracket_nesting += 1
                    elif code[code_pointer] == ']':
                        bracket_nesting -= 1
            else:
                brackets_stack.append(code_pointer)
        elif command == ']':
            if tape[pointer] != 0:
                code_pointer = brackets_stack[-1]
            else:
                brackets_stack.pop()
        code_pointer += 1

# Example usage:
brain_fuck("++++++++++[>+>+++>+++++++>++++++++++<<<<-]>>>++.>+.+++++++..+++.<<++.>+++++++++++++++.>.+++.------.--------.")
This implementation covers the basic commands of Brainfuck and handles input/output as specified in the standard Brainfuck language.
It uses a simple stack to handle loops, jumping over the code when the condition is not met. The tape size is fixed at 30,000 cells,
which can be adjusted based on requirements.
Keep in mind that this implementation does not include error handling for malformed Brainfuck code or
runtime errors (like accessing out of bounds memory). You might want to add checks and exceptions to make the interpreter more
robust and user-friendly.
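To try the generated interpreter, copy the Python code from the answer file into a script and run it (a minimal sketch; the file name bf.py is arbitrary):
# paste the brain_fuck function and the example call into bf.py, then:
python bf.py    # the example program prints "Hello World"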
More info :