-
Running a Local LLM on an Intel iGPU with llama.cpp SYCL and Hermes
How to run Qwen3.5 on an Intel Iris Xe GPU with llama.cpp, deploy it via Ansible, and wire it up to Hermes as a homelab AI agent
-
Open Source India 2022
The 19th edition of Open Source India
-
IndiaFOSS 2.0
The 2nd edition of the Free and Open Source Software conference by the FOSS United community
-
Three months teaching in India
My experience teaching software engineering for three months at a rural college in India