diff --git a/.env.example b/.env.example
new file mode 100644
index 0000000..95c360f
--- /dev/null
+++ b/.env.example
@@ -0,0 +1,17 @@
+POSTGRES_HOST=192.168.0.200
+POSTGRES_PORT=5432
+POSTGRES_DB=fbla
+POSTGRES_USER=postgres
+POSTGRES_PASSWORD=postgrespw
+JWT_SECRET=CHANGE_ME
+BASE_URL=https://fbla26.marinodev.com
+EMAIL_HOST=marinodev.com
+EMAIL_PORT=465
+EMAIL_USER=westuffind@marinodev.com
+EMAIL_PASS=CHANGE_ME
+FBLA26_PORT=8000
+BODY_SIZE_LIMIT=10MB
+LLAMA_PORT=8001
+LLAMA_HOST=192.168.0.200
+LLAMA_MODEL=Qwen3VL-2B-Instruct-Q4_K_M.gguf
+LLAMA_MMPROJ=mmproj-Qwen3VL-2B-Instruct-Q8_0.gguf
diff --git a/README.md b/README.md
index 75842c4..5e9fd6e 100644
--- a/README.md
+++ b/README.md
@@ -1,38 +1,93 @@
-# sv
+# CareerConnect - FBLA 2025
 
-Everything you need to build a Svelte project, powered by [`sv`](https://github.com/sveltejs/cli).
+## Overview
 
-## Creating a project
+This is a lost and found application built using [SvelteKit](https://kit.svelte.dev/) for the 2026 FBLA Website Coding &
+Development event. It allows users to browse items, post found items, and manage them. The
+application is designed for fast performance and a seamless user experience.
 
-If you're seeing this, you've probably already done this step. Congrats!
 
+## Features
+
+- User authentication (login/signup/logout)
+  - Email-only token-based methods for non-admins
+- Browse/search items
+- Post found items
+- Inquire about items
+- Claim items
+- Email notifications
+- Themes
+
+## Installation
+
+To set up the project locally, follow these steps:
+
+### Prerequisites
+
+- [Node.js](https://nodejs.org/) (LTS recommended)
+- [npm](https://www.npmjs.com/) or [pnpm](https://pnpm.io/)
+
+### Clone the repository
 
 ```sh
-# create a new project in the current directory
-npx sv create
-
-# create a new project in my-app
-npx sv create my-app
+git clone https://git.marinodev.com/MarinoDev/FBLA25
+cd FBLA25
 ```
 
-## Developing
+Create a `.env` file in the root directory and configure the environment variables; `.env.example` is provided as a
+template (`cp .env.example .env`).
+Download a llama.cpp-compatible GGUF model (and its matching mmproj file) into `llm-models`. I
+recommend [Qwen3-VL-2B-Instruct](https://huggingface.co/Qwen/Qwen3-VL-2B-Instruct-GGUF).
 
-Once you've created a project and installed dependencies with `npm install` (or `pnpm install` or `yarn`), start a development server:
+### Docker
+
+A `Dockerfile` and a `docker-compose.yml` are provided for running the application in a Docker container; start the
+full stack (app, database, and AI server) with `docker compose up -d`.
+
+### Manual
+
+Using Docker is strongly recommended, as it bundles the database and the AI server. When running manually, you must
+provide a PostgreSQL instance (and, optionally, the AI server) yourself, matching the values in `.env`.
+
+#### Install dependencies
+
+```sh
+npm install
+```
+
+#### Start the development server
 
 ```sh
 npm run dev
-
-# or start the server and open the app in a new browser tab
-npm run dev -- --open
 ```
 
-## Building
+Go to `http://localhost:5173/` (or the port shown in the terminal).
 
-To create a production version of your app:
+## Deployment
+
+To deploy the application, build it and start the production server:
 
 ```sh
 npm run build
+node build
 ```
 
-You can preview the production build with `npm run preview`.
 
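+The deployment steps can be sketched end-to-end (a minimal sketch: it assumes `.env` is already configured, and it uses
+`npm ci` instead of `npm install` for a reproducible install from the lockfile):
+
+```sh
+npm ci        # clean install of locked dependencies
+npm run build # compile the SvelteKit app into ./build
+node build    # start the production Node server
+```
+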
+## Resources Used
+
+### Technologies
+
+- [SvelteKit](https://kit.svelte.dev/)
+- [Tailwind CSS](https://tailwindcss.com/)
+- [shadcn-svelte](https://www.shadcn-svelte.com)
+
+### Libraries
+
+- [dotenv](https://www.npmjs.com/package/dotenv)
+- [bcrypt](https://www.npmjs.com/package/bcrypt)
+- [desm](https://www.npmjs.com/package/desm)
+- [nodemailer](https://www.npmjs.com/package/nodemailer)
+- [jsonwebtoken](https://www.npmjs.com/package/jsonwebtoken)
+- [postgres.js](https://www.npmjs.com/package/postgres)
+- [@lucide/svelte](https://www.npmjs.com/package/@lucide/svelte)
+- [sharp](https://www.npmjs.com/package/sharp)
+- [valibot](https://www.npmjs.com/package/valibot)
+
+
-> To deploy your app, you may need to install an [adapter](https://svelte.dev/docs/kit/adapters) for your target environment.
diff --git a/docker-compose.yml b/docker-compose.yml
index 2dfc735..2dc5476 100644
--- a/docker-compose.yml
+++ b/docker-compose.yml
@@ -34,9 +34,9 @@ services:
       - ./llm-models:/models:ro
     command:
       - -m
-      - /models/Qwen3VL-2B-Instruct-Q4_K_M.gguf
+      - /models/${LLAMA_MODEL}
       - --mmproj
-      - /models/mmproj-Qwen3VL-2B-Instruct-Q8_0.gguf
+      - /models/${LLAMA_MMPROJ}
       - --host
       - 0.0.0.0
       - --port
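
Note on the `docker-compose.yml` change above: Docker Compose substitutes `${LLAMA_MODEL}` and `${LLAMA_MMPROJ}` from the shell environment or from the `.env` file in the project directory, so with the values from `.env.example` the container command is equivalent to the hard-coded paths it replaces (the flags suggest llama.cpp's server; abbreviated, with the elided port left as-is):

```sh
-m /models/Qwen3VL-2B-Instruct-Q4_K_M.gguf --mmproj /models/mmproj-Qwen3VL-2B-Instruct-Q8_0.gguf --host 0.0.0.0 --port ...
```

If either variable is unset, Compose substitutes an empty string and only prints a warning; writing `${LLAMA_MODEL:?}` would instead fail fast with an explicit error.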