
Nvidia Transcoding

I'm looking at getting an RTX 4060 Ti 12GB for my home server, mainly for AI testing, but I was curious about the current transcoding limit for things like Plex. I know it used to be 2, but I thought this had been increased to 8. I'm also running Plex in Docker on a native Ubuntu host. The server has an 11th-gen i7 with an iGPU, so I know that can handle many more streams; I was just curious whether the Nvidia card can do more than the originally quoted 2.
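NVIDIA has relaxed the driver-side NVENC session cap on consumer GeForce cards several times over the years (it is no longer 2), so the practical limit mostly comes down to driver version and GPU load. For the Docker side, a minimal sketch of passing the card through to a Plex container, assuming the nvidia-container-toolkit is installed on the Ubuntu host (the image tag and host paths here are placeholders):

```shell
# Sketch: expose the NVIDIA GPU to a Plex container for NVENC/NVDEC.
# Paths under -v are placeholders; adjust to your own layout.
docker run -d \
  --name=plex \
  --gpus all \
  -e NVIDIA_VISIBLE_DEVICES=all \
  -e NVIDIA_DRIVER_CAPABILITIES=compute,video,utility \
  -v /path/to/plex/config:/config \
  -v /path/to/media:/data \
  --network=host \
  lscr.io/linuxserver/plex:latest
```

Once running, `nvidia-smi` on the host shows the active encode sessions, which is an easy way to see how many concurrent transcodes the card is actually sustaining.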

submitted by /u/Rooneybuk
[link] [comments]

Selfhosted AI, what now?

Hi All,

I've gone down the rabbit hole of self-hosted AI and wonder what to try next.

I'm currently running a Mac Mini M2 with 16GB RAM and an AMD Ryzen Threadripper 3960X (24 cores) with 64GB RAM and an RTX 3080 with 10GB VRAM.

Up to now I've played with:

  • Ollama
  • Ollama WebUI
  • Stable Diffusion with WebUI
  • a VSCode plugin that works with Ollama for code completion
  • an iOS app to connect back to Ollama

I'm considering trying to train an LLM on my own data, with no real purpose other than to see how it works.
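A cheap first step before any actual training is getting your data into the prompt/completion JSONL format that many fine-tuning tools accept. A minimal sketch (the field names and example pairs here are made up for illustration; check the exact schema your chosen trainer expects):

```python
import json

# Hypothetical training pairs; in practice these would come from your own notes.
pairs = [
    ("What does my home server run?", "Plex, Ollama, and Stable Diffusion."),
    ("Which GPU is in the Threadripper box?", "An RTX 3080 with 10GB of VRAM."),
]

# Write one JSON object per line (JSONL).
with open("train.jsonl", "w", encoding="utf-8") as f:
    for prompt, completion in pairs:
        f.write(json.dumps({"prompt": prompt, "completion": completion}) + "\n")

# Read it back: each line parses as a standalone JSON record.
with open("train.jsonl", encoding="utf-8") as f:
    records = [json.loads(line) for line in f]
print(len(records))  # 2
```

With only a 10GB card, full training is out of reach, but parameter-efficient fine-tuning (LoRA/QLoRA) of a small model on a file like this is a realistic weekend experiment.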

What other cool things could I try now that I'm up and running, or are there any other self-hosted solutions worth trying?

submitted by /u/Rooneybuk
[link] [comments]