Llama 2 datasets on GitHub. See Reproducing Experiments in the Paper for details.
By fine-tuning it on your specific data, you can harness its power for text tasks. The following code requires at least 8x RTX 4090 GPUs to support an 80K context. Submit a Pull Request to this repo to check in the metadata. You can use bfloat16 if you want to save memory (`torch_dtype: "auto"`); quantization is performed using the bitsandbytes integration.

Models fine-tuned from Meta LLaMA: the BELLE-LLaMA-7B series (the full list appears later in this document). Easy-to-use LLM fine-tuning framework (LLaMA, BLOOM, Mistral, Baichuan, Qwen, ChatGLM) — TingchenFu/LlamaFactory. CKIP-Llama-2-7b is an open-source, commercially usable Traditional Chinese large language model developed by the CKIP Lab at Academia Sinica; it builds on the commercially usable open models Llama-2-7b and Atom-7b, strengthens Traditional Chinese processing, is trained and optimized on 405 commercially usable task files, and has 7 billion (7B) parameters. While we've fine-tuned this model specifically for Vietnamese, its underlying base is primarily trained on English.

Input: Name three diseases that elderly people are prone to. (高齢の方が罹りやすい病気を3つ挙げてください。) [1/30] 🔥 LLaVA-NeXT (LLaVA-1.6) is out! With additional scaling to LLaVA-1.5, LLaVA-NeXT-34B outperforms Gemini Pro on some benchmarks. New: Code Llama support! — getumbrel/llama-gpt.

BERT pretrained models can be loaded in two ways: (i) by passing the name of the model and using the Hugging Face cached versions, or (ii) by passing the folder containing the vocabulary and the PyTorch pretrained model (see convert_tf_checkpoint_to_pytorch here to convert a TensorFlow model to PyTorch). It supports the evaluation of LMMs on dozens of public datasets and allows new dataset onboarding, making the development of new LMMs much faster. Our model is also designed for captioning music files in order to generate Text-to-Music Generation datasets.

LLaMA-Efficient-Tuning: an easy-to-use fine-tuning framework using PEFT (PT+SFT+RLHF with QLoRA) for LLaMA-2, BLOOM, Falcon, and Baichuan; see also awesome-llm and aigc. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align to human preferences for helpfulness and safety. We release LLaVA Bench for benchmarking open-ended visual chat, with results from Bard and Bing-Chat. The code, pretrained models, and fine-tuned models are released. LangChain & Prompt Engineering tutorials on Large Language Models (LLMs) such as ChatGPT, using custom data. This new collection of foundational models opens the door to faster inference performance and ChatGPT-like real-time assistants, while being cost-effective.

OpenChat is an innovative library of open-source language models fine-tuned with C-RLFT, a strategy inspired by offline reinforcement learning. The 7B model is selected by default in this project, but it is also configurable with the 13B and 70B versions of the model, depending on your compute resources. LLaMA-VID training consists of three stages: (1) a feature-alignment stage that bridges the vision and language tokens; (2) an instruction-tuning stage that teaches the model to follow multimodal instructions; and (3) a long-video tuning stage that extends the position embedding and teaches the model to follow hour-long video instructions. Besides, TinyLlama is compact, with only 1.1B parameters, which means it can be plugged into many open-source projects built upon Llama.

We use tensor_parallel as implemented in this repo because it is much faster than Hugging Face's device_map and more lightweight than vLLM. Ensure your GPU has enough memory. Feb 15, 2024 · Compared to ChatGLM's P-Tuning, LLaMA-Factory's LoRA tuning offers up to 3.7 times faster training speed with a better ROUGE score on the advertising-text generation task. Check out our model zoo here! [2023/07] We extended support to more LLM models, including MPT and Falcon. Code Llama was developed by fine-tuning Llama 2 using a higher sampling of code.
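To make the dtype and quantization notes above concrete (`torch_dtype: "auto"`, bfloat16 to save memory, bitsandbytes quantization), here is a minimal loading sketch. It assumes access to the `meta-llama/Llama-2-7b-hf` checkpoint on the Hugging Face Hub; the exact model id and quantization parameters are illustrative, not prescribed by the source.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "meta-llama/Llama-2-7b-hf"  # assumed checkpoint; requires an accepted license on the Hub

# 4-bit NF4 quantization via the bitsandbytes integration, with bfloat16 compute
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    torch_dtype="auto",   # or torch.bfloat16 explicitly to save memory
    device_map="auto",    # spread layers across available GPUs
)
```

Loading in 4-bit this way keeps the 7B model comfortably within a single 24 GB consumer GPU, which is what makes the QLoRA workflows described below practical.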
python run.py --models hf_llama_7b --datasets mmlu_ppl ceval_ppl -a lmdeploy — OpenCompass has predefined configurations for many models and datasets. Inference: modal run src.inference_sql_llamaindex::main --query "Which city has the highest population?". I used QLoRA and the translated Alpaca dataset to fine-tune the Llama-2-7B model on an RTX 3090 (24 GB VRAM) in 9 hours.

Each file contains five fields, separated by a tab — homograph: the homograph word, in lower case. Token counts refer to pretraining data only. Train the Llama 2 LLM architecture in PyTorch, then run inference with one simple 700-line C file. Our models learn from mixed-quality data without preference labels, delivering exceptional performance on par with ChatGPT, even with a 7B model that can be run on a consumer GPU (e.g. an RTX 3090). Oct 6, 2023 · Optionally, you can check how Llama 2 7B does on one of your data samples.

The goal of this repository is to provide a scalable library for fine-tuning Meta Llama models, along with example scripts and notebooks to quickly get started with using the models in a variety of use cases, including fine-tuning for domain adaptation and building LLM-based applications with Meta Llama. This is the format actually used to pretrain GPT-like models. Jul 19, 2023 · In this work, we develop and release Llama 2, a collection of pretrained and fine-tuned large language models (LLMs) ranging in scale from 7 billion to 70 billion parameters. Finetuning: modal run --detach src.finetune_sql. But I can confirm that all the data comes from well-known public datasets and we did not use any test data from the evaluation benchmarks. Prepare the dataset's metadata (card.json and README.md). The dataset includes over 100B text documents coming from 84 CommonCrawl snapshots, processed using the CCNet pipeline. Jupyter notebooks on loading and indexing data, creating prompt templates, CSV agents, and using retrieval QA chains to query the custom data. Reduce the `batch_size`.

Thanks for these references: NTU NLP Lab's Alpaca dataset alpaca-tw_en-align.json (ntunpllab's translation of the Stanford Alpaca 52k dataset) and the Chinese Llama 2 7B training data. Better fine-tuning dataset and performance. Limited fine-tuning: the current model has been fine-tuned on a small dataset. About "HTTPError: 404 Client Error" and "OSError: meta-llama/Llama-2-7b does not appear to have a file named config.json". You can view all of the available llama-hub artifacts conveniently on the llama-hub website. In this tutorial, we will use the multi-document Natural Questions dataset and the BookSum dataset. Modify the model/training. As the neural-net architecture is identical, we can also run inference on the Llama 2 models released by Meta. Try the --streaming and --max_steps 10000 arguments to load your dataset in streaming mode. The goal of this repository is to provide examples to quickly get started with fine-tuning for domain adaptation and with running inference for the fine-tuned models. The get_preprocessed_arithmetic function in dataset.py shows how to read custom data and convert it into inputs for the Llama 2 model.

A self-hosted, offline, ChatGPT-like chatbot. The primary goal of this project is to provide a cleaned and curated version of the Alpaca dataset that will improve the performance of natural language processing models trained on this data. 100% private, with no data leaving your device. llama2_classification.py: fine-tune on the News Group classification dataset. Prompts used: the `prompts` file.
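The QLoRA run on a single RTX 3090 mentioned above boils down to attaching small trainable LoRA adapters to the quantized base model. A minimal sketch with the `peft` library follows; it continues from the 4-bit loading sketch earlier, and the rank, alpha, and target modules are illustrative defaults rather than values taken from the source.

```python
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# `model` is the 4-bit quantized Llama-2-7B loaded in the earlier sketch
model = prepare_model_for_kbit_training(model)

lora_config = LoraConfig(
    r=16,                                  # adapter rank (illustrative)
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "v_proj"],   # attention projections to adapt
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the small LoRA matrices are trained
```

Because only the adapter weights receive gradients, the optimizer state stays tiny, which is why a 7B model plus the translated Alpaca data fits in 24 GB of VRAM.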
We aim to summarize extensive documents or data sets efficiently, providing users with concise and relevant summaries. Poetry: a tool for dependency management and Python packaging. In particular, we're using the Llama2-7B model deployed by the Andreessen Horowitz (a16z) team and hosted on the Replicate platform. Falcon-7B & LLaMA 2: state-of-the-art language models, prepped and ready for legal training. That's where LlamaIndex comes in. RedPajama-V2 is an open dataset for training large language models. Our dataset: comprehensive Indian law knowledge, spanning constitutional law, civil rights, and more! With the application of methods such as LoRA fine-tuning, full-parameter instruction fine-tuning, and secondary pre-training, we cordially invite you to download and utilize the associated datasets, training guides, and model parameters.

We release Meditron-7B and Meditron-70B, which are adapted to the medical domain from Llama-2 through continued pretraining on a comprehensively curated medical corpus, including selected PubMed papers and abstracts, a new dataset of internationally recognized medical guidelines, and a general-domain corpus. It is designed to be easy to use and easy to extend. Mar 24, 2023 · Databricks' Dolly is an instruction-following large language model trained on the Databricks machine learning platform that is licensed for commercial use. Hence, our project, Multiple Document Summarization Using Llama 2, proposes an initiative to address these issues. For ease of use, the examples use Hugging Face converted versions of the models. [2023/07] 🔥 We added AWQ support and pre-computed search results for Llama-2 models (7B & 13B). Sep 12, 2023 · Predominant focus on English: the original version of Llama 2 was chiefly focused on English-language data. Llama Datasets 🦙📝. Method 2: If you are using macOS or Linux, you can install llama.cpp via brew, flox, or nix.

🦙 Chinese-Llama-2 aims to further enhance the Chinese understanding, generation, and translation capabilities of the Llama-2 model. As the Llama 2 model is fine-tuned over more epochs, it continues to improve its accuracy on the SQuAD v2 task, up until about 8 epochs. Compared to the first generation of the project, the main features include: 📖 an optimized Chinese vocabulary. DCA can be seamlessly integrated with (1) popular extrapolation methods such as Positional Interpolation (PI), NTK-Aware RoPE, and YaRN; and (2) widely used libraries for memory-efficient inference such as FlashAttention and vLLM. This repository contains the code to fine-tune the Llamav2 language model on custom data for text-classification tasks. In contrast to the previous version, we follow the original LLaMA-2 paper and split all numbers into individual digits. Oct 3, 2023 · We adopted exactly the same architecture and tokenizer as Llama 2.

This is an implementation of fine-tuning the Llama-2 model with the QLoRA (Quantized LoRA) framework, using a specific version of Llama and a particular dataset, all from the Hugging Face Hub. This guide provides information and resources to help you set up Llama, including how to access the model, hosting, and how-to and integration guides. It's simply a whole bunch of text, with BOS and EOS tokens marking the boundaries of each document. --local-dir (Optional): local directory path where the model or dataset will be stored.
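The pretraining format described above (plain text with BOS/EOS tokens marking document boundaries, later packed into fixed-length sequences) can be sketched in a few lines. The block size and example documents below are illustrative; the tokenizer id is assumed to be the Llama 2 one.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")  # assumed tokenizer

def pack_documents(texts, block_size=4096):
    """Concatenate documents separated by BOS/EOS tokens and cut into fixed-size blocks."""
    ids = []
    for text in texts:
        ids.append(tokenizer.bos_token_id)
        ids.extend(tokenizer.encode(text, add_special_tokens=False))
        ids.append(tokenizer.eos_token_id)
    # drop the incomplete tail so every training block has exactly block_size tokens
    return [ids[i:i + block_size] for i in range(0, len(ids) - block_size + 1, block_size)]

blocks = pack_documents(["first document ...", "second document ..."], block_size=8)
```

Base models are trained directly on such blocks with a next-token objective, with no instruction template involved.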
To perform inference using the fine-tuned Llama-2 model, follow these steps: ensure you've successfully fine-tuned Llama-2 as explained in the Fine-tuning Llama-2 section. Example: hfd bigscience/bloom-560m --exclude *.safetensors. If you have 4x 80GB A100 GPUs, you can extend the context to at least 128K. Llama-2-7B-Chat: an open-source fine-tuned Llama 2 model designed for chat dialogue. These notebooks walk through downloading and configuring the Llama 2 model from Hugging Face, preparing a custom dataset, and fine-tuning the pretrained base model against this new dataset. We release the resources associated with QLoRA fine-tuning in this repository under the GPLv3 license. llama2_summarization.py: fine-tune on the Samsum summarization dataset. We enhance our previous tokenizer in vietnamese-llama2-4b-40GB by training SentencePiece on a more extensive collection of clean Vietnamese documents spanning diverse domains such as news, books, stocks, finance, and law. For detailed information on model training, architecture and parameters, evaluations, and responsible AI and safety, refer to our research paper. Llamav2 is a state-of-the-art natural language processing model developed for a wide range of NLP tasks. A further script covers zero-shot, few-shot, and instruction tuning.

The 'llama-recipes' repository is a companion to the Meta Llama 2 and Meta Llama 3 models. Llama 2 is being released with a very permissive community license and is available for commercial use. So Step 1: get the Llama 2 checkpoints by following the Meta instructions. Better base model. ValueError: Cannot open data\dataset_info.json due to [Errno 2] No such file or directory: 'data\dataset_info.json'. [23/07/29] We release two instruction-tuned 13B models at Hugging Face. The text column (prompts) in the CSV file is then used to train the model. You can list all available model and dataset configurations using the provided tools. For example, LLaMA's 13B architecture outperforms GPT-3 despite being 10 times smaller. Getting started with Meta Llama. Patients can interact with the ChatDoctor model through a chat interface, asking questions about their health, symptoms, or medical conditions. The only difference between our setting and the original one is the dataset used: OpenLLaMA employs open datasets rather than the one utilized by the original LLaMA. Once we have those checkpoints, we have to convert them into the format used here. The Llama-2-7B-32K-beta model can be fine-tuned using various datasets. Out of these, there are 30B documents in the corpus that additionally come with quality signals, and 20B documents that are deduplicated.

This repo is a companion repo to the llama-hub repo, meant to be the actual storage of data files associated with a llama-dataset. Demo apps showcase Meta Llama 3 for WhatsApp. Llama-2-7B-32K-Instruct is fine-tuned over a combination of two data sources: 19K single- and multi-round conversations generated by human instructions and Llama-2-70B-Chat outputs. Tamil LLaMA is now bilingual; it can fluently respond in both English and Tamil. Supports default and custom datasets for applications such as summarization and Q&A. TinyChat enables efficient LLM inference on both cloud and edge GPUs. In the first generation of the project, we expanded the Chinese vocabulary for the first-generation Chinese LLaMA and Alpaca models (LLaMA: 49,953 tokens; Alpaca: 49,954 tokens). Model args: model_name_or_path: meta-llama/Llama-2-7b-chat-hf, plus the dtype in which the model will be loaded.
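For the inference step described at the top of this passage, a common pattern is to reload the base checkpoint and attach the trained LoRA adapter before generating. This is a minimal sketch under that assumption; the adapter directory name and prompt template are hypothetical and should be replaced with whatever your fine-tuning run produced.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Llama-2-7b-hf"        # assumed base checkpoint
adapter_dir = "./llama2-finetuned-adapter"  # hypothetical output dir from fine-tuning

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_dir)  # attach the trained LoRA weights
model.eval()

prompt = "### Instruction:\nSummarize the following document.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

If you intend to ship a single checkpoint, the adapter can also be merged into the base weights (for example with `merge_and_unload()` in peft) before saving.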
We also support and verify training with the RTX 3090 and RTX A6000. Llama 2 family of models. In the meantime you can run each step individually, as below — loading data: modal run src.load_data_sql. Downloading and converting the base model. The high-level steps are: create a LabelledRagDataset (the initial class of llama-dataset made available on llama-hub), then generate a baseline result with a RAG system of your own choosing on the LabelledRagDataset. For example, if you have a dataset mapping users' biometric data to their health scores, you could test the following eval_prompt: `eval_prompt = """Given the following biometric data, score the users' health, from 0-100. ..."""` (a completed sketch of this check appears below).

The MU-LLaMA model is a Music Understanding Language Model designed to answer questions based on music. [2023/07/19] 🔥 We release a major upgrade, including support for LLaMA-2, LoRA training, 4-/8-bit inference, higher resolution (336x336), and a lot more. We construct prompts for LLaMA-2, a decoder-only architecture, using continuous text data to fine-tune and update model weights. Recently, it has attracted significant attention to exploit much larger and more powerful LLMs (e.g., ChatGPT, GPT-4) to self-generate instruction-following data through careful prompt design. The app includes session chat history and provides an option to select multiple LLaMA2 API endpoints on Replicate. The ChatDoctor model is designed to simulate a conversation between a doctor and a patient, using natural language processing (NLP) and machine-learning techniques. Meditron is a suite of open-source medical Large Language Models (LLMs). PEFT & QLoRA: the dream duo for memory-efficient and high-performance model fine-tuning.

The objectives of this project are threefold: implement the Llama 2 model using JAX to enable efficient training and inference on Google Cloud TPU; develop a high-quality codebase that serves as an exemplary implementation of the Transformer model using JAX; and facilitate the identification of … Scripts for fine-tuning Meta Llama 3 with composable FSDP & PEFT methods to cover single/multi-node GPUs. Leverages publicly available instruction datasets and over 1 million human annotations. Steps for running. Live demo: LLaMA2.ai. Llama-2-chat models are supported! Check out our implementation here. 💫 Finetuning on a dataset using Colab Demo 2. Jul 18, 2023 · Llama 2 is a family of state-of-the-art open-access large language models released by Meta today, and we're excited to fully support the launch with comprehensive integration in Hugging Face. For the full list of wordids, see wordids.tsv in the repo of the Wikipedia dataset. You might think that you need many-billion-parameter LLMs to do anything useful, but in fact very small LLMs can have surprisingly strong performance if you make the domain narrow enough (ref: the TinyStories paper). Apr 18, 2024 · Model architecture: Llama 3 is an auto-regressive language model that uses an optimized transformer architecture. This folder contains ready-to-use scripts with which you can fine-tune Llama 2 using the PEFT methodology QLoRA: llama2_classification.py and llama2_summarization.py. Sep 25, 2023 · Open-sourcing the data for Colossal-LLaMA-2 is currently not on our agenda, but we are discussing it and might open-source it in the future.
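Returning to the eval_prompt example above, a minimal completed check might look like the following. The biometric field names and values are invented for illustration, and the sketch assumes a `model` and `tokenizer` are already loaded as in the earlier snippets.

```python
import torch

# Hypothetical field names and values; the real dataset schema is not shown in the source.
eval_prompt = """Given the following biometric data, score the user's health, from 0-100.

### Biometric data:
Age: 52, Resting heart rate: 61 bpm, Blood pressure: 118/76, BMI: 24.1

### Health score:
"""

model_input = tokenizer(eval_prompt, return_tensors="pt").to(model.device)
model.eval()
with torch.no_grad():
    generated = model.generate(**model_input, max_new_tokens=20)
    print(tokenizer.decode(generated[0], skip_special_tokens=True))
```

Running such a spot check on one of your own samples before and after fine-tuning is a quick way to confirm the model has actually picked up the target format.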
Our models outperform open-source chat models on most benchmarks we tested and, based on our human evaluations for helpfulness and safety, may be a suitable substitute for closed-source models. Model architecture: Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. As with Llama 2, we applied considerable safety mitigations to the fine-tuned versions of the model. GitHub — KevKibe/Finetuning-Llama2-with-QLoRA: an implementation of fine-tuning the Llama-2 model with the QLoRA (Quantized LoRA) framework (described above). This repository provides a comprehensive guide and implementation for fine-tuning the LLaMA 2 language model using custom datasets. Supporting a number of inference solutions, such as HF TGI and vLLM, for local or cloud deployment. [11.14] ⭐️ The current README file is for Video-LLaMA-2 (LLaMA-2-Chat as the language decoder) only; instructions for using the previous version of Video-LLaMA (Vicuna as the language decoder) can be found here. Clear the cache. This compactness allows it to cater to a multitude of applications demanding a restricted computation and memory footprint.

Each knowledge-editing scenario comprises three components, starting with the Editor, such as BaseEditor (for factual knowledge and generation editing). This project presents SQL-LLaMA, a Text-2-SQL model based on LLaMA-2 [Ref. 1] for instruction-based generation of SQL code from natural-language queries. LLaMA is creating a lot of excitement because it is smaller than GPT-3 but has better performance. Train and run a small Llama 2 model from scratch on the TinyStories dataset. It reads the original dataset from a JSONL file, constructs prompts for the model to train on, and saves the data along with the prompts as a CSV file (a sketch of this step appears below). Models generated with these datasets are not typically as useful outside of few-shot and zero-shot learning. This chatbot is created using the open-source Llama 2 LLM model from Meta. Powered by Llama 2. Finetune Falcon-7B on Your GPU with TRL and QLoRA: a blog post about tuning Falcon-7B on your consumer GPU. Better tokenizer. The self-instruct dataset was created by using Llama 2 to create interview programming questions and then using Code Llama to generate unit tests and solutions, which are later evaluated by executing the tests. Alpaca-LoRA-Output: Three diseases that are more common in the elderly are heart disease, stroke, and dementia. In addition, we release the FIN-LLAMA model family for base LLaMA model sizes of 7B, 13B, 33B, and 65B. EasyEdit contains a unified framework for Editor, Method, and Evaluate, respectively representing the editing scenario, the editing technique, and the evaluation method.

LLaMA2 — every resource you need. Jul 18, 2023 · In this section, we look at the tools available in the Hugging Face ecosystem to efficiently train Llama 2 on simple hardware and show how to fine-tune the 7B version of Llama 2 on a single NVIDIA T4 (16 GB — Google Colab). [23/07/31] Now we support dataset streaming. We will actively maintain this project. base_model is the path to Llama-2-70b or meta-llama/Llama-2-70b-hf, as shown in the example command; lora_weights either points to the LoRA weights you downloaded or to your own fine-tuned weights; test_data_path either points to test data to run inference on (in the NERRE repo for this example) or to your own prompts to run inference on (note that this defaults to a JSONL file, with each line carrying its text under the expected key). We refer to the Llama-based model with dual chunk attention as ChunkLlama.
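The JSONL-to-CSV preparation step referenced above can be reproduced with the standard library alone. This is a sketch under the assumption that each JSONL line carries `instruction` and `output` fields and that an Alpaca-style prompt template is wanted; both the field names and the template are illustrative.

```python
import csv
import json

def build_prompt(example):
    # Alpaca-style prompt layout; adjust to match the actual training template.
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{example['instruction']}\n\n### Response:\n{example['output']}"
    )

with open("train.jsonl", encoding="utf-8") as f:                       # hypothetical input file
    rows = [json.loads(line) for line in f if line.strip()]

with open("train_prompts.csv", "w", newline="", encoding="utf-8") as f:  # hypothetical output file
    writer = csv.DictWriter(f, fieldnames=["text"])
    writer.writeheader()
    for example in rows:
        writer.writerow({"text": build_prompt(example)})
```

The resulting `text` column is then the single field the trainer consumes, matching the "text column (prompts) in the CSV file" described earlier.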
It provides the following tools: data connectors to ingest your existing data sources and data formats (APIs, PDFs, docs, SQL, etc.). Example downloads: hfd meta-llama/Llama-2-7b --hf_username myuser --hf_token mytoken -x 4, and hfd lavita/medical-qa-shared-task-v1-toy --dataset. Extracting relevant data from a pool of documents demands substantial manual effort and can be quite challenging. The model uses MERT + LLaMA as the backbone and employs an adapter to incorporate music context information. Aug 25, 2023 · For the instruction model, they used two datasets: the instruction-tuning dataset collected for Llama 2 Chat and a self-instruct dataset. See these Hugging Face repos (LLaMA-2 / Baichuan) for details. Alpaca-LoRA-Output: The capital of Japan is Tokyo. Llama 3 uses a tokenizer with a vocabulary of 128K tokens and was trained on sequences of 8,192 tokens. This project launches the Chinese LLaMA-2 and Alpaca-2 models based on Llama-2. These models are intended for purposes in line with the LLaMA license and require access to the LLaMA models. Some of the steps below have been known to help with this issue, but you might need to do some troubleshooting to figure out the exact cause of your issue. Sadly there is a bit of friction here due to licensing (I can't directly upload the checkpoints, I think). ChatCSV bot using Llama 2, Sentence Transformers, CTransformers, LangChain, and Streamlit — AIAnytime/ChatCSV-Llama2-Chatbot.

Welcome to the Llama Chinese community! We are an advanced technical community focused on optimizing the Llama models for Chinese and building on top of them. Starting from pretraining, we have continuously iterated on and upgraded the Chinese capabilities of the Llama 2 model using large-scale Chinese data. [Done] Tamil LLaMA v0.2 models are out. Fine-Tune-LLamaV2-on-Custom-Data. Our fine-tuned LLMs, called Llama 2-Chat, are optimized for dialogue use cases. Additionally, you will find supplemental materials to further assist you while building with Llama. wordid: a string identifying a pronunciation of the homograph word. We performed a grid search for the optimal learning rate (which happened to be the same for all three models) using the 10M BabyLM dataset (strict-small task).

Provides ways to structure your data (indices, graphs) so that this data can be easily used with LLMs. Method 3: Use a Docker image; see the documentation for Docker. By removing errors and inconsistencies, the goal is to improve the performance of the fine-tuned Llama models and reduce the likelihood of hallucinations. Purple Llama is an umbrella project that over time will bring together tools and evals to help the community build responsibly with open generative AI models. This is an experimental Streamlit chatbot app built for LLaMA2 (or any other LLM). See philschmid/sagemaker-huggingface-llama-2-samples on GitHub. For the LLaMA2 license agreement, please check the Meta Platforms, Inc. official license documentation on their website. In this repository I release the model weights, the dataset, and the code used for fine-tuning the LLaMA-2 7B and 13B language models. It also tends to adhere more strictly to the output format, to the point of not returning an explanation in most cases (although it is still possible to ask it to produce its reasoning with a follow-up prompt).
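The LlamaIndex workflow described above — ingest with data connectors, structure into an index, then query — looks roughly like this. The import path assumes a recent llama-index release (older versions expose the same classes from the top-level `llama_index` package), the `data/` directory is hypothetical, and an LLM/embedding backend (for example an OpenAI key, or a local Llama 2 configured via `Settings`) must be available for the query step.

```python
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

documents = SimpleDirectoryReader("data").load_data()   # ingest PDFs, docs, CSVs, etc. from ./data
index = VectorStoreIndex.from_documents(documents)      # structure the data as a vector index
query_engine = index.as_query_engine()

print(query_engine.query("Which city has the highest population?"))
```

The same query string used in the modal inference command earlier is reused here only to show where a natural-language question enters the pipeline.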
After preparing the data-processing function, users can pass it in via the --dataset and --custom_dataset.file arguments. The fine-tuned BELLE models include BELLE-LLaMA-7B-0.6M-enc, BELLE-LLaMA-7B-2M-enc, BELLE-LLaMA-7B-2M-gptq-enc, BELLE-LLaMA-13B-2M-enc, and BELLE-on-Open-Datasets, as well as BELLE-LLaMA-EXT-7B, a pretrained model whose Chinese vocabulary was extended from LLaMA. Please refer to the Meta LLaMA license; these models are currently provided for learning and research exchange only, and LLaMA's usage restrictions must be strictly observed. To make the comparison with GPT-2 more accurate, we enabled weight tying in both Llama and GPT-J (this feature is disabled by default). Run the inference script, infer.py. — EN10/BabyLlama. The dataset comes in two files: llama_hd_train.tsv and llama_hd_eval.tsv.

This project is the JAX implementation of Llama 2. We collected the dataset following the distillation paradigm used by Alpaca, Vicuna, WizardLM, and Orca — producing instructions by querying a powerful LLM. We train the models on Cloud TPU v4s using EasyLM, a JAX-based training pipeline we developed for training and fine-tuning large language models. Feb 7, 2024 · 💫 Zero-shot forecasting on a dataset of any frequency for any prediction length, using Colab Demo 1. Lower the precision. Base models are trained with this format of dataset. Based on pythia-12b, Dolly is trained on ~15k instruction/response fine-tuning records (databricks-dolly-15k) generated by Databricks employees in capability domains from the InstructGPT paper, including brainstorming and classification. llama-recipes provides an interface that lets users design their own input format for training data (see dataset.py).

Our models match or better the performance of Meta's LLaMA 2 on almost all benchmarks. By using Low-Rank Adaptation (LoRA) and Quantized Low-Rank Adaptation (QLoRA), it enables efficient and scalable model fine-tuning, making it suitable for resource-limited environments. [23/07/19] Now we support training the LLaMA-2 models in this repo. 💫 Reproducing experiments in the paper using the released scripts. This app was refactored from a16z's implementation of their LLaMA2 Chatbot to be lightweight for deployment to the Streamlit Community Cloud. Japanese-Alpaca-LoRA-Output: 「日本の首都は東京です。」 ("The capital of Japan is Tokyo.") Like tools, loaders, and llama-packs, llama-datasets are offered through llama-hub. The initial release will include tools and evals for cyber security and input/output safeguards, but we plan to contribute more in the near future. Please load the notebook tutorial.ipynb for full instructions. [08.03] 🚀🚀 Release of Video-LLaMA-2 with Llama-2-7B/13B-Chat as the language decoder. First, you need to get the binary. --dataset (Optional): flag to indicate downloading a dataset. LlamaIndex is a "data framework" to help you build LLM apps. Grouped-Query Attention (GQA) is used for all models to improve inference efficiency. It is a significant upgrade compared to the earlier version.
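To illustrate the custom-data interface mentioned above (the --dataset / --custom_dataset.file arguments and the dataset.py hook exemplified by get_preprocessed_arithmetic), here is a rough sketch of what such a preprocessing function might look like. The exact signature and registration mechanism expected by llama-recipes may differ; the file name and the `question`/`answer` fields are hypothetical.

```python
from datasets import load_dataset

def get_preprocessed_custom(dataset_config, tokenizer, split):
    # Assumes a local JSONL file with "question" and "answer" fields; split handling is simplified.
    data = load_dataset("json", data_files="arithmetic.jsonl", split="train")

    def to_features(example):
        prompt = f"Question: {example['question']}\nAnswer: "
        prompt_ids = tokenizer.encode(tokenizer.bos_token + prompt, add_special_tokens=False)
        answer_ids = tokenizer.encode(example["answer"] + tokenizer.eos_token, add_special_tokens=False)
        return {
            "input_ids": prompt_ids + answer_ids,
            "attention_mask": [1] * (len(prompt_ids) + len(answer_ids)),
            # mask the prompt tokens so the loss is only computed on the answer
            "labels": [-100] * len(prompt_ids) + answer_ids,
        }

    return data.map(to_features, remove_columns=data.column_names)
```

Whatever the exact hook looks like in your version of the library, the essential idea is the same: turn each raw record into `input_ids`, `attention_mask`, and `labels` in the format the Llama 2 trainer expects.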