Exploring LoRA – Part 1: The Idea Behind Parameter Efficient Fine-Tuning
(medium.com)
161 points by aquastorm 5 days ago | 16 comments
3abiton 3 days ago | root | parent |
Thanks for sharing. This got me thinking: why is Medium used so much for technical articles like this? Especially since lots of articles have been getting put behind a paywall for me recently.
ivanmontillam 2 days ago | root | parent | next |
Making it less accessible, right? I was thinking exactly the same.
anshumankmr 2 days ago | root | parent | prev |
short answer: to make money
jwildeboer 3 days ago | prev | next |
(Not to be confused with LoRa (short for "long range"), which is a spread spectrum modulation technique derived from chirp spread spectrum (CSS) technology, powering technologies like LoRaWAN and Meshtastic.)
SeasonalEnnui 3 days ago | root | parent | next |
This gets me every time. I expect to see something interesting and it turns out to be the other one. One is a fantastic thing and the other is mediocre; pick which way round at your discretion!
sva_ 3 days ago | root | parent | next |
Pretty simple to spot LoRa vs LoRA.
rkagerer 3 days ago | root | parent |
Mnemonic: capital A for "AI"
pavlov 3 days ago | root | parent | prev |
What exactly is the confusion? Does “parameter efficient fine-tuning” mean anything in the context of the other LoRa? If not, then it’s probably obvious which one this is about.
mrgaro 3 days ago | root | parent |
Actually it does: LoRa the radio protocol has parameters to tune. Usually both sender and receiver need to match these, so I read this as a method for tuning them automatically based on distance and the radio environment.
FusspawnUK 3 days ago | root | parent | prev |
Really wish they had come up with another name. Googling gets annoying.
the__alchemist 3 days ago | root | parent |
Contributing factors: they both use mixed capitalization, and they have partially overlapping audiences.
danielhanchen 3 days ago | prev | next |
Super cool series of articles! :)
gautambt 4 days ago | prev |
Generated notebooklm here: https://notebooklm.google.com/notebook/7094a513-af83-4c5b-a4...
khazhoux 3 days ago | root | parent |
What is this? Is it a Google summarization service?
threepi 4 days ago | next |
Author here. Happy to see this posted here. This is actually a series of blog posts:
1. Exploring LoRA — Part 1: The Idea Behind Parameter Efficient Fine-Tuning and LoRA: https://medium.com/inspiredbrilliance/exploring-lora-part-1-...
2. Exploring LoRA - Part 2: Analyzing LoRA through its Implementation on an MLP: https://medium.com/inspiredbrilliance/exploring-lora-part-2-...
3. Intrinsic Dimension Part 1: How Learning in Large Models Is Driven by a Few Parameters and Its Impact on Fine-Tuning: https://medium.com/inspiredbrilliance/intrinsic-dimension-pa...
4. Intrinsic Dimension Part 2: Measuring the True Complexity of a Model via Random Subspace Training: https://medium.com/inspiredbrilliance/intrinsic-dimension-pa...
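If you just want the gist of Part 1 before clicking through: LoRA freezes the pretrained weight matrix W and learns only a low-rank update BA on top of it, so a d×k layer needs r(d+k) trainable parameters instead of d·k. Here's a minimal PyTorch sketch of the idea (my own illustration for this comment, not the exact code from the posts):

    import torch
    import torch.nn as nn

    class LoRALinear(nn.Module):
        """A frozen pretrained linear layer plus a trainable low-rank update."""
        def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
            super().__init__()
            self.base = base
            self.base.weight.requires_grad_(False)   # freeze pretrained W
            if self.base.bias is not None:
                self.base.bias.requires_grad_(False)
            d, k = base.out_features, base.in_features
            # Low-rank factors: B starts at zero, so training starts from W exactly.
            self.A = nn.Parameter(torch.randn(r, k) * 0.01)
            self.B = nn.Parameter(torch.zeros(d, r))
            self.scale = alpha / r

        def forward(self, x):
            # Effective weight is W + scale * (B @ A), computed without forming it.
            return self.base(x) + self.scale * (x @ self.A.t() @ self.B.t())

    layer = LoRALinear(nn.Linear(512, 512), r=8)
    trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
    print(trainable)  # 8192 trainable params vs 262144 for full fine-tuning

Part 2 walks through this in much more detail on an MLP.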
Hope you enjoy reading the other posts too. Merry Christmas and Happy Holidays!