
I am very new to Docker, and I am trying to dockerize my Python server so that it pulls llama3.1 automatically on container creation, following this post:
how to pull model automatically with container creation?

Here is my Dockerfile:

# Use the official Python image as the base image
FROM python:3.11

# Set environment variables
ENV PYTHONUNBUFFERED=1

# Install system dependencies
RUN apt-get update && apt-get install -y \
    build-essential \
    libgl1-mesa-glx \
    libglib2.0-0 \
    libsm6 \
    libxext6 \
    libxrender-dev \
    curl \
    && apt-get clean

# Set the working directory
WORKDIR /app

# Copy the project files
COPY . .

# Install Python dependencies
RUN pip install --upgrade pip
RUN pip install -r requirements.txt

# Expose the application port
EXPOSE 8000

# Run the app using uvicorn
CMD ["uvicorn", "PythonServer:app", "--host", "0.0.0.0", "--port", "8000", "--reload"]

compose.yaml

version: "3.8"

services:
  ollama:
    image: ollama/ollama:latest
    container_name: ollama
    ports:
      - "11434:11434"
    pull_policy: always
    tty: true
    restart: always
    entrypoint: ["/usr/bin/bash", "/entrypoint.sh"]
    volumes:
      - ./ollama/ollama:/root/.ollama
      - ./entrypoint.sh:/entrypoint.sh  

  chroma:
    image: chromadb/chroma:latest
    container_name: chromadb
    volumes:
      - chromadb-data:/data  # Persistent storage for ChromaDB

  python-app:
    build:
      context: .
      dockerfile: Dockerfile
    container_name: python-app
    ports:
      - "8000:8000"
    volumes:
      - .:/app  # Mount the current directory for live updates
      - /app/__pycache__  # Ignore cache files
    depends_on:
      - chroma
      - ollama

volumes:
  chromadb-data:
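One thing worth noting about this compose file: `python-app` cannot reach Ollama at `localhost:11434`. Inside the compose network, each container addresses the others by service name. A minimal sketch (the `OLLAMA_URL` variable is my own convention, not something Compose or Ollama defines):

```shell
# Inside the compose network, services resolve by service name, not localhost.
# Let the Python app take the Ollama base URL from an env var, defaulting to
# the "ollama" service name (OLLAMA_URL is a hypothetical variable name).
OLLAMA_URL=${OLLAMA_URL:-http://ollama:11434}

# Sanity check from inside the python-app container, e.g. via
# "docker compose exec python-app bash":
#   curl "$OLLAMA_URL/api/tags"   # lists the models Ollama has pulled so far
```

The same default could be passed via an `environment:` entry on `python-app` so the code works unchanged outside compose by overriding the variable.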

and entrypoint.sh

#!/bin/bash

# Start Ollama in the background.
/bin/ollama serve &
# Record Process ID.
pid=$!

# Pause for Ollama to start.
sleep 5

echo "Pulling llama3.1 model..."
ollama pull llama3.1
echo "Done."

# Wait on the Ollama server process so the container stays alive.
wait $pid
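As an aside, the fixed `sleep 5` races the server start: on a slow host the `ollama pull` can fire before the server is listening. A sketch of a poll-based alternative (the `wait_for` helper, its name, and the retry budget are my own, not part of Ollama):

```shell
#!/bin/bash
# Sketch: replace the fixed "sleep 5" with a readiness poll.
# wait_for CMD RETRIES runs CMD once per second until it succeeds,
# returning 1 if the retry budget is exhausted first.
wait_for() {
  local cmd=$1 retries=${2:-30}
  until $cmd > /dev/null 2>&1; do
    retries=$((retries - 1))
    if [ "$retries" -le 0 ]; then
      return 1
    fi
    sleep 1
  done
}

# In entrypoint.sh it could be used with the Ollama CLI as the probe, since
# "ollama list" only succeeds once the server is accepting connections:
#   wait_for "ollama list" 30 || exit 1
```

Using `ollama list` as the probe avoids assuming `curl` exists inside the `ollama/ollama` image.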

Tags: docker, Pulling llama3.1 for Python application, Stack Overflow