Docker LLM: supports transformers and word vectors.

Our developer hardware varied between MacBook Pros (M1 chip, our developer machines) and one Windows machine with a "Superbad" GPU running WSL2 and Docker on WSL.

llm.enableAutoSuggest lets you choose to enable or disable "suggest-as-you-type" suggestions.

vLLM supports most popular open-source LLM models, such as Llama 2, Mistral, and Falcon. We are committed to continuously testing and validating new open-source models that emerge every day.

It then copies the pre-trained LLM model to the /app directory and runs the app.py script when the image is run. All text-generation-webui extensions are included and supported (Chat, SuperBooga, Whisper, etc.).

Manual setup via the NebulaGraph console: if the above options are not applicable, manually create the NebulaGraph space, tags, edges, and indexes using the NebulaGraph console.

Run Locally: instructions for running the LLM locally on CPU and GPU with frameworks like llama.cpp and Ollama. Deployment: a demonstration of how to deploy Qwen for large-scale inference with frameworks like vLLM, TGI, etc.

With OpenLLM, you can easily build a Bento for a specific model, like dolly-v2-3b, using the build command.

Grab your LLM model: choose your preferred model from the Ollama library (Llama 2, Mistral, and more!).

The Mistral AI APIs empower LLM applications via text generation, which enables streaming and provides the ability to display partial model results in real time.

Docker's commitment to fostering innovation and collaboration means we're excited to see the practical applications and solutions that will emerge from this ecosystem. Introducing Docker Build Cloud: a new solution to speed up build times and improve developer productivity. The Docker GenAI Stack lets teams easily integrate NVIDIA accelerated computing into their AI workflows. MLC LLM is a machine learning compiler and high-performance deployment engine for large language models.

You can uncomment depth, upscaler, inpainting, and gfpgan in the Dockerfile (first generated image). Ask your favourite LLM how to install and configure docker, docker-compose, and the NVIDIA CUDA Docker runtime for your platform! Docker Compose is the recommended deployment method (it is the easiest and quickest way to manage folders and settings through updates and reinstalls).

OCR still sucks! Especially when you're from the other side of the world (and face a significant lack of training data in your language), or are just not thrilled with noisy results.

Dec 1, 2023 · To make sure that the steps are perfectly replicable for anyone, here is a guide to PrivateGPT with Docker, containing all the dependencies (and making it work 100% of the time). Launch Docker.

Jul 5, 2023 · As our LLM, we use OpenAI's ChatGPT. For Docker Engine on Linux, install the NVIDIA Container Toolkit.

    conda activate python311  # run fp16 Llama-2-7b models on a single GPU

In addition to LLM serving, it also includes a CLI and a web frontend (Aviary Explorer) that you can use to compare the outputs of different models directly, rank them by quality, get a cost and latency estimate, and more. XAgent is designed to be a general-purpose agent that can be applied to a wide range of tasks.

Nov 13, 2023 · In AWS, go to API Gateway and choose REST API; the backing Lambda function here is generateLLMInference_lambda.

Step 4: Build the Docker image.

    docker build -t my-app .

A minimal Python client for the containerized LLM begins like this (the endpoint URL below is an assumption; point it at whatever OpenAI-compatible server your container exposes):

    import requests
    import json

    def generate_poem():
        """Generates a poem using an LLM."""
        model = "gpt-3.5-turbo"  # Find the appropriate "model" string for your language model here.
        response = requests.post(
            "http://localhost:8000/v1/chat/completions",
            json={"model": model,
                  "messages": [{"role": "user", "content": "Write a short poem about Docker."}]},
        )
        return json.loads(response.text)["choices"][0]["message"]["content"]
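To package a small script like the one above the way these snippets describe (copy the code and the pre-trained model into /app, then run the app), a Dockerfile might look roughly like the following; the base image, file names, and model path are illustrative assumptions rather than anything taken from the original guides:

    FROM python:3.11-slim
    WORKDIR /app
    # Copy the application code and the pre-trained model into /app (illustrative paths).
    COPY app.py /app/
    COPY models/ /app/models/
    RUN pip install --no-cache-dir requests
    # Run the app when the container starts.
    CMD ["python", "app.py"]

The image can then be built with the docker build -t my-app . command shown above.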
Local-LLM is a simple llama.cpp server that easily exposes a list of local language models to choose from and run on your own computer.

Sep 19, 2023 · Start Weaviate. The application demonstration is available on both Streamlit Public Cloud and Google App Engine.

Find out how to use Anyscale/ray-llm. It enables you to achieve sub-millisecond communication latency between Ray actors and tasks, which is crucial for real-time applications.

Large Language Model Hosting Container: contribute to awslabs/llm-hosting-container development by creating an account on GitHub.

Run Open WebUI with Intel GPU. Run Dify on Intel GPU.

Now that you have an application, you can use docker init to create the necessary Docker assets to containerize it. Inside the docker-genai-sample directory, run the docker init command. Run ./build.sh from the repo directory to build the container.

text-generation-webui is always up-to-date with the latest code.

Apr 10, 2023 · DJLServing is a high-performance model server that added support for AWS Inferentia2 in March 2023. DJL is also part of Rubikon support for Neuron, which includes the integration between DJLServing and transformers-neuronx.

TensorRT-LLM contains components to create Python and C++ runtimes that execute those TensorRT engines.

My PC specs: i7-13700K, RTX 3090 24 GB, DDR5 128 GB; prepare a Docker runtime environment.

    jetson-containers run $(autotag mlc)  # or explicitly specify one of the container images above

That's where AutoGPT comes in. AutoGPT is an innovative tool that utilizes LLM technology to research and learn the best methods for creating generative workspaces.

Alpaca LLM inside a Docker container: this Docker image is based on the Stanford 'Alpaca' [1] model, which is a fine-tuned version of Meta's 'LLaMA' [3] foundational large language model.

Docker Compose will download and install Python 3.11, Node Version Manager (NVM), and Node.js.

Embeddings are useful for RAG, where they represent the meaning of text as a list of numbers.

Nov 6, 2023 · This DockerCon talk explores the fundamental concepts of graphs, graph databases, and NebulaGraph. By simply spinning up one of the NVIDIA AI Enterprise …

Build up to 39x faster with Docker Build Cloud.

May 31, 2023 · We're excited to announce the release of Aviary: a new open-source project that simplifies and enables easy self-hosted serving of multiple LLM models efficiently.

Support for the Qwen2 series LLM, including Base and Instruct models of 0.5B, 7B, and 72B, as well as corresponding quantized versions gptq-int4, gptq-int8, and awq-int4.

Sep 18, 2023 · This Dockerfile tells Docker to use the latest version of the Serge image as the base image.

DBRX has 132B total parameters and 36B active parameters.

Run H2O LLM Studio with the command line interface (CLI): you can also use H2O LLM Studio from the CLI and specify the configuration .yaml file that contains all the experiment parameters.

Create a docker-compose.yml file with the following code. Then restart the Docker service and check whether containers can see your GPU:

    sudo systemctl restart docker
    sudo docker run --rm --gpus all nvidia/cuda:12.1-base-ubuntu22.04 nvidia-smi

Add the Ollama service and a volume in your compose.yml.
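A minimal version of that service definition could look like this (a sketch; the port mapping and volume layout should be adapted to your setup):

    services:
      ollama:
        image: ollama/ollama
        ports:
          - "11434:11434"
        volumes:
          - ollama:/root/.ollama

    volumes:
      ollama:

Bring it up with docker compose up -d; the named volume keeps downloaded models across container restarts.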
Dec 20, 2023 · Install Docker: download and install Docker Desktop for Windows and macOS, or Docker Engine for Linux.

Enjoy state-of-the-art LLM serving throughput on your Windows Home PC with efficient paged attention, continuous batching, and fast inference, plus quantization.

Jan 27, 2024 · Local-LLM is designed to be as easy as possible to get started with running local models. You can use this image to experiment with different LLMs, such as Mistral-7B or GGML, and generate multimodal and creative outputs.

XAgent is an open-source, experimental Large Language Model (LLM)-driven autonomous agent that can automatically solve various tasks.

The AWS Model Server team offers a container image that can help with LLM/AIGC use cases.

Explore the features and benefits of ollama/ollama on Docker Hub. Ollama enables you to build and run GenAI applications with minimal code and maximum performance. On the installed Docker Desktop app, go to the search bar and type ollama (an optimized framework for loading models and running LLM inference).

This article summarizes a development environment for tuning LLMs.

September 18th, 2023: Nomic Vulkan launches, supporting local LLM inference on NVIDIA and AMD GPUs.

LightLLM is a Python-based LLM (Large Language Model) inference and serving framework, notable for its lightweight design, easy scalability, and high-speed performance.

Though running the LLM through the CLI is a quick way to test the model, it is less than ideal.

The all-in-one Desktop & Docker AI application with full RAG and AI Agent capabilities. - Mintplex-Labs/anything-llm

train.py is a Python script that ingests and normalizes EEG data from a CSV file (train.csv) and trains two models to classify the data (using scikit-learn).

MaxKB (Max Knowledge Base) is an open-source knowledge-base question-answering system built on large language models, designed to serve as an enterprise's strongest brain. Out of the box: it supports uploading documents directly and crawling online documents, with automatic text splitting, vectorization, and RAG (retrieval-augmented generation) for a good interactive Q&A experience. Model-neutral: it supports connecting to a wide range of large language models.

Mar 18, 2024 · LLM-powered apps with the Docker GenAI Stack.

The -d option runs containers in detached mode.

This guide provides general instructions for setting up the IPEX-LLM Docker containers with an Intel GPU. Run PrivateGPT with IPEX-LLM on Intel GPU. Run Ollama with IPEX-LLM on Intel GPU. You can find all files on GitHub.

It also supports various network transports and configurations.

🚀 Effortless Setup: install seamlessly using Docker or Kubernetes (kubectl, …). It supports various LLM runners, including Ollama and OpenAI-compatible APIs.

Dec 1, 2023 · LLM Server: the most critical component of this app is the LLM server. Thanks to Ollama, we have a robust LLM server that can be set up locally, even on a laptop. While llama.cpp is an option, I find Ollama, written in Go, easier to set up and run.

The best practice for self-cognition fine-tuning, inference, and deployment of Qwen2-72B-Instruct using dual 80 GiB A100 cards can be found here.

Running Open Interpreter locally.

Option 1: Install the NebulaGraph Docker Extension from Docker Hub.

Supporting multiple LLM backends out of the box, including vLLM and TensorRT-LLM.

Portability: Docker containers encapsulate the GenAI service, its dependencies, and its runtime environment, ensuring consistent behavior across different environments.

Especially since the open-sourcing of smaller-scale LLMs that ordinary users can run, such as ChatGLM and LLaMA, the community has produced a large number of fine-tuned models and applications built on LLMs. This project aims to collect and organize open-source models, applications, datasets, and tutorials related to Chinese LLMs; it already lists more than 100 resources!

Docker build uses BuildKit to turn a Dockerfile into a Docker image, OCI image, or another image format. But whereas a compiler takes source code and libraries and produces an executable, BuildKit takes a Dockerfile and a file path and creates a container image.
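As a concrete illustration of that point (not a command from any of the quoted guides), BuildKit's buildx front end can emit different output formats from the same Dockerfile:

    # Standard build, loaded into the local image store (BuildKit is the default builder)
    docker build -t my-app .

    # Export the same build as an OCI image layout instead of loading it into the daemon
    docker buildx build -t my-app --output type=oci,dest=my-app-oci.tar .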
May 7, 2024 · Conversely, if GPU resources are limited or unavailable, you can deploy a container with a CPU-based LLM.

Aug 19, 2023 · GPU-accelerated LLM on ARM64 in Docker (docker, machine-learning, homelab).

AI and ML are now part of many applications and add to the complexity of the development environment. Docker removes repetitive, mundane configuration tasks and is used throughout the development lifecycle for fast, easy, and portable application development.

NVIDIA provides pre-built, free Docker containers for a variety of AI workloads, including those that use the NeMo framework for LLMs.

DBRX is a state-of-the-art open-source LLM trained by the Databricks Mosaic team. We have released two DBRX models.

Aug 8, 2023 · Our new Snowpark Container Services feature enables you to run Docker containers inside Snowflake, including ones that are accelerated with NVIDIA GPUs.

docker-compose.yml provides services that build and run containers:

    docker compose build
    docker compose run dalai npx dalai alpaca install 7B   # or a different model
    docker compose up -d

Since we want to serve our application, we use FastAPI to create endpoints for users to interact with our agent.

Run Coding Copilot in VSCode with Intel GPU. Run Performance Benchmarking with IPEX-LLM.

VLMs on Laptop: follow the instructions to deploy VLMs on NVIDIA laptops with TinyChat.

🔍 Better text detection by combining multiple OCR engines with 🧠 LLM.

XAgent is still in its early stages, and we are working hard to improve it.

I hope this is helpful to anyone struggling to set up an environment for LLM tuning.

Building a containerized LLM chatbot application: Jul 31, 2023 · This article delves into the various tools and technologies required for developing and deploying a chat app powered by LangChain, the OpenAI API, and Streamlit.

To start the container, you can use jetson-containers run and autotag, or manually put together a docker run command that automatically pulls or builds a compatible container image. For a deeper dive into the available arguments, run ./main --help.

We're big fans of open-source LLMs here at Anyscale. As part of our research on LLMs, we started working on a chatbot project using RAG, Ollama, and Mistral.

Mar 23, 2023 · Effortlessly build machine learning apps with Hugging Face's Docker Spaces.

Apr 28, 2024 · Ollama is an open-source serving tool for large language models (LLMs) that lets users run and deploy them on a local machine. Ollama is designed as a framework to simplify deploying and managing large language models in Docker containers, making the process quick and easy.

Dec 16, 2023 · ExLlama, a turbo-charged Llama GPTQ engine, performs 2x faster than AutoGPTQ (Llama 4-bit GPTQs only); CUDA-accelerated GGML support, with support for all RunPod systems and GPUs.

When using MemGPT with open LLMs (such as those downloaded from Hugging Face), the performance of MemGPT will be highly dependent on the LLM's function-calling ability. You can find a list of LLMs/models that are known to work well with MemGPT on the #model-chat channel on Discord, as well as on this spreadsheet.

Running AnythingLLM on AWS/GCP/Azure? You should aim for at least 2 GB of RAM. Disk storage is proportional to however much data you will be storing (documents, vectors, models, etc.).
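For a single-host deployment along those lines, the Dockerized AnythingLLM can be started roughly as follows; the image name, port, and storage path are assumptions to verify against the AnythingLLM documentation:

    mkdir -p ~/anythingllm
    # Persist documents, vectors, and settings on the host so they survive container updates
    docker run -d --name anythingllm \
      -p 3001:3001 \
      -v ~/anythingllm:/app/server/storage \
      mintplexlabs/anythingllm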
Oct 19, 2023 · TensorRT-LLM is an easy-to-use Python API for defining Large Language Models (LLMs) and building TensorRT engines that contain state-of-the-art optimizations to perform inference efficiently on NVIDIA GPUs. What can you do with TensorRT-LLM? See the Quick Start Guide.

May 21, 2023 · To get Dalai up and running with a web interface, first build the Docker Compose file: docker-compose build. At stage seven of nine, the build will appear to freeze as Docker Compose downloads Dalai. Don't worry: check your bandwidth use to reassure yourself.

An IPEX-LLM container is a pre-configured environment that includes all the necessary dependencies for running LLMs on Intel GPUs.

See the full list of supported LLMs at https://docs.vllm.ai/en/latest.

This makes the adoption process much faster, which draws the attention of many developers to the project and accelerates its development.

This stack, designed for seamless component integration, can be set up on a developer's laptop using Docker Desktop for Windows.

Apr 29, 2023 · Auto-GPT AI-generated Dockerfile. This ensures a consistent runtime environment across different deployments.

LightLLM harnesses the strengths of numerous well-regarded open-source implementations, including but not limited to FasterTransformer, TGI, vLLM, and FlashAttention.

Set a resource name and make sure to check the CORS box to avoid denied requests.

LoRAX (LoRA eXchange) is a framework that allows users to serve thousands of fine-tuned models on a single GPU, dramatically reducing the cost of serving without compromising throughput or latency.

YAML is structured data, so it's easy to modify and extend. Create a .env file and add your API keys (Step 3).

DBRX uses the Mixture-of-Experts (MoE) architecture and was trained with optimized versions of Composer, LLM Foundry, and MegaBlocks.

Nov 24, 2023 · In this article, I try running an LLM with GPTQ quantization; my PC specs are listed above. I use VS Code Dev Containers to prepare the Docker runtime environment (how to install Docker with GPU support was covered in an earlier article).

Download the Ollama Docker image: one simple command (docker pull ollama/ollama) gives you access to the magic.
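If you prefer plain docker commands over the Compose service sketched earlier, the equivalent is roughly:

    docker pull ollama/ollama
    # Persist downloaded models in a named volume and expose the API on port 11434
    docker run -d --name ollama -v ollama:/root/.ollama -p 11434:11434 ollama/ollama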
Join us as we make AI/ML more accessible and straightforward for developers everywhere.

LLM Everywhere: Docker and Hugging Face. The Hugging Face Hub is a platform that enables collaborative open-source machine learning (ML). The hub works as a central place where users can explore, experiment, collaborate, and build technology with machine learning; on the hub, you can find more than 140,000 models. Set up a local development environment for Hugging Face with Docker.

Accelerate local LLM inference and finetuning (LLaMA, Mistral, ChatGLM, Qwen, Baichuan, Mixtral, Gemma, etc.) on Intel CPU and GPU (e.g., a local PC with iGPU): a PyTorch LLM library that seamlessly integrates with llama.cpp, Ollama, HuggingFace, LangChain, LlamaIndex, DeepSpeed, vLLM, FastChat, ModelScope, etc. - yshashix/ipex-llm-docker-k8s

Then, click the Run button on the top search result. Finally, our application is containerized using Docker, which allows us to easily deploy it in any type of environment. The container can then be deployed to any environment, regardless of the underlying infrastructure. Aug 14, 2023 · Containerization enhances reproducibility and portability.

MLC LLM compiles and runs code on MLCEngine, a unified high-performance LLM inference engine across the platforms above.

Summary of txtai features: 📄 create embeddings for text snippets, documents, audio, images, and video; ↪️ workflows that join pipelines together to aggregate business logic. txtai processes can be microservices or full-fledged indexing workflows.

Feb 8, 2024 · An LLM (Mistral-7B) will generate a richer textual description based on a user prompt (e.g., "set the image into space"), transforming the description into one that meets the prompt (e.g., "in the vast expanse of space, a majestic whale carries a container on its back").

Apr 11, 2024 · A foolproof LLM setup: a Docker Compose quick-start bundle for Ollama + Open WebUI.

The rate of improvement of open-source LLMs has been nothing short of phenomenal, and this has given the AI community many options.

To run the container via Docker Compose, first create a docker-compose.yaml file. Before starting Weaviate on Docker, ensure that the Docker Compose file is named exactly docker-compose.yml and that you are in the same folder as the file, then run:

    docker compose up -d

June 28th, 2023: a Docker-based API server launches, allowing inference of local LLMs from an OpenAI-compatible HTTP endpoint. July 2023: stable support for LocalDocs, a feature that allows you to privately and locally chat with your data.

llama.cpp in a containerized server + langchain support - turiPO/llamacpp-docker-server.

OpenLLM supports LLM cloud deployment via BentoML, the unified model serving framework, and BentoCloud, an AI inference platform for enterprise AI teams.

Whether you want to write a novel, create a comic, or design a logo, Mintplex Labs' AnythingLLM image can help you unleash your imagination.

With this setup, you can easily run and experiment with vLLM on Windows Home.

Alternatively, you can run the H2O LLM Studio GUI by using our self-hosted Docker image, available here.

Apr 3, 2024 · From the user interface you can set WORKSPACE, LLM_BASE_URL, LLM_MODEL, LLM_API_KEY, and more.
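The same variables can also be supplied when starting the container; only the variable names come from the snippet above, while the image name and values below are placeholders:

    docker run -d \
      -e WORKSPACE=/workspace \
      -e LLM_BASE_URL=http://host.docker.internal:11434 \
      -e LLM_MODEL=mistral \
      -e LLM_API_KEY=changeme \
      your-agent-image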
I think the current "AI" hype is overblown, but certainly the recent advances in ML around large language models (LLMs) have been impressive. Find out how leveraging Docker optimizes infrastructure, accelerates dev/production deployment, and enhances AI development efficiency. jetson-containers run dustynv/mlc:r36. This portability allows you to develop and test the AI model locally and Oct 5, 2023 · The GenAI Stack simplifies AI/ML integration, making it accessible to developers. Disk storage is proportional to however much data you will be storing (documents, vectors, models, etc). docker exec-it ollama ollama run phi3 # Download and run mistral 7B model, by Mistral AI docker exec-it ollama ollama run mistral If you use the TinyLLM Chatbot (see below) with Ollama, make sure you specify the model via: LLM_MODEL="llama3" This will cause Ollama to download and run this LLM on the Edge: AWQ and TinyChat support edge GPUs such as NVIDIA Jetson Orin. Compiled and ran the model. Mintplex Labs offers a Docker image that allows you to run any LLM model on any text or image input. So, when Machine Learning Compilation (MLC) recently posted an LLM chat demo that can run on the Mali G610 GPU There are different methods that you can follow: Method 1: Clone this repository and build locally, see how to build. The LlamaEdge project supports all Large Language Models (LLMs) based on the llama2 framework. Gradio Server: Try to build your own VLM online demo with AWQ and TinyChat! QServe: 🔥 [New] Efficient and accurate serving system for large-scale LLM inference. cpp with IPEX-LLM on Intel GPU. Apr 1, 2021 · Continuing our analogy, BuildKit is a compiler, just like LLVM . 04 nvidia-smi; Run . interpreter. Tip. 开箱即用 :支持直接上传文档、自动爬取在线文档,支持文本自动拆分、向量化、RAG(检索增强生成),智能问答交互体验好;. Run Llama 3 on Intel GPU using llama. It begins with instructions and tips for Docker installation, and then introduce the available IPEX-LLM containers Without docker Alternatively, you can directly spawn a vLLM server on a GPU-enabled host with Cuda 11. Before starting Weaviate on Docker, ensure that the Docker Compose file is named exactly docker-compose. ; Anyscale/ray-llm is a Docker image that provides a low-latency middleware (LLM) for Ray, a framework for distributed computing and machine learning. It helps deliver the power of NVIDIA GPUs and NVIDIA NIM to Apr 21, 2021 · In order to start building a Docker container for a machine learning model, let’s consider three files: Dockerfile, train. Local-LLM is a simple llama. It uses the 'dalai' [2] tool download and Access the Alpaca model via an webserver. AI/ML Development. The setup. /docker/bash. sh script generates a . cpp is an option, I find Ollama, written in Go, easier to set up and run. Pull the Docker Image: Step 2. LLM Everywhere: Docker and Hugging Face. Llamafile’s concept of bringing Mar 17, 2024 · # run ollama with docker # use directory called `data` in current working as the docker volume, # all the data in the ollama(e. For Docker Desktop on Windows 10/11, install the latest NVIDIA driver and make sure you are using the WSL2 backend. $ docker compose up -d. VSCodeのDev Containersを使って、Dockerの実行環境を準備します。GPUを用いたdockerのインストール方法は以前記事にしたした。 Feb 12, 2024 · Opensource Models supported by vLLM. cpp and ollama with IPEX-LLM. Create a . """ # Get the LLM model from the Docker container. You will be able to use the Docker image with PrivateGPT I created - Quick Setup Feb 26, 2024 · Apple Silicon GPUs, Docker and Ollama: Pick two. 
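A Docker-based alternative is to run vLLM's OpenAI-compatible server as a container; this is a sketch, and the image, flags, and model name should be checked against the vLLM documentation:

    docker run --gpus all -p 8000:8000 \
      -v ~/.cache/huggingface:/root/.cache/huggingface \
      vllm/vllm-openai --model mistralai/Mistral-7B-Instruct-v0.2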
Discover how to construct an AI chatbot using Docker, Streamlit, and GPT-3.5 for profound philosophical conversations.

Feb 26, 2024 · Apple Silicon GPUs, Docker, and Ollama: pick two.

Run Large Language Models on RK3588 with GPU acceleration - Chrisz236/llm-rk3588.

ollama/ollama is the official Docker image for Ollama, a state-of-the-art generative AI platform that leverages large language models, vector and graph databases, and the LangChain framework.

Sep 1, 2023 · The application uses an LLM to generate the poem, and Docker to package the application into a container.

The model files must be in the GGUF format.

Code generation: empowers code-generation tasks, including fill-in-the-middle and code completion.

Mintplex Labs offers a Docker image that allows you to run any LLM model on any text or image input.

LLM on the Edge: AWQ and TinyChat support edge GPUs such as the NVIDIA Jetson Orin.

You can also use this setup as the environment for the following article on continued pre-training of LLMs, so please make use of it.

In this Quick Start Guide, you installed and built TensorRT-LLM, retrieved the model weights, compiled and ran the model, deployed it with Triton Inference Server, and sent requests.

    # Download and run Phi-3 Mini, an open model by Microsoft
    docker exec -it ollama ollama run phi3

    # Download and run the Mistral 7B model, by Mistral AI
    docker exec -it ollama ollama run mistral

If you use the TinyLLM Chatbot (see below) with Ollama, make sure you specify the model via LLM_MODEL="llama3"; this will cause Ollama to download and run this model.
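Once a model is running in the Ollama container, other processes can reach it through Ollama's REST API on port 11434; for example (the model name here is simply the one pulled above):

    curl http://localhost:11434/api/generate -d '{
      "model": "mistral",
      "prompt": "Write a short poem about Docker.",
      "stream": false
    }'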