Tencent Hunyuan-Large
(github.com) | 147 points by helloericsf 7 days ago | 113 comments
ronsor 7 days ago | root | parent | next |
I will again ask the obligatory question: are model weights even copyrightable? And if not, does the "license" still matter?
parl_match 7 days ago | root | parent | next |
I doubt there will be a satisfactory answer for a long time.
killjoywashere 7 days ago | root | parent |
How's that NYTimes vs OpenAI lawsuit going? Last I can find is things are hung up in discovery: OpenAI has requested potentially a century of NYTimes reporters' notes.
https://news.bloomberglaw.com/ip-law/openais-aggressive-cour...
bdowling 7 days ago | root | parent | next |
Half a century worth of reporters’ notes might be some valuable training data.
neilv 7 days ago | root | parent | prev |
> The AI company asked Judge Sidney H. Stein of the US District Court for the Southern District of New York to step in and compel the Times to produce reporters’ notes, interview memos, and other materials for each of the roughly 10 million contested articles the publication alleges were illegally plugged into the company’s AI models. OpenAI said it needs the material to suss out the copyrightability of the articles. The Times quickly fired back, calling the request absurd.
Can any lawyer on here defend OpenAI's request? Or is the article not characterizing it well in the quote?
warkdarrior 7 days ago | root | parent | prev |
(IANAL)
Model weights could be treated the same way phone books, encyclopedias, and other collections of data are treated. The copyright is over the collection itself, even if the individual items are not copyrightable.
TMWNN 7 days ago | root | parent | next |
>phone books, encyclopedias, and other collections of data are treated
Encyclopedias are copyrightable. Phone books are not.
skissane 7 days ago | root | parent | next |
> Encyclopedias are copyrightable. Phone books are not.
It depends on the jurisdiction. The US Supreme Court ruled that phone books are not copyrightable in the 1991 case Feist Publications, Inc., v. Rural Telephone Service Co. However, that is not the law in the UK, which generally follows the 1900 House of Lords decision Walter v Lane that found that mere "sweat of the brow" is enough to establish copyright – that case upheld a publisher's copyright on a book of speeches by politicians, purely on the grounds of the human effort involved in transcribing them.
Furthermore, under its 1996 Database Directive, the EU introduced the sui generis database right, which is a legally distinct form of intellectual property from copyright, but with many of the same features, protecting mere aggregations of information, including phone directories. The UK has retained this after Brexit. However, EU directives give member states discretion over the precise legal mechanism of their implementation, and the UK used that discretion to make database rights a subset of copyright – so, while in EU law they are a technically distinct type of IP from copyright, under UK law they are an application of copyright. EU law only requires database rights to have a term of 15 years.
Do not be surprised if in the next couple of years the EU comes out with a "AI Model Weights Directive" establishing a "sui generis AI model weights right". And I'm sure US Congress will be interested in following suit. I expect OpenAI / Meta / Google / Microsoft / etc will be lobbying for them to do so.
ronsor 7 days ago | root | parent | prev |
Encyclopedias may be collections of facts, but the writing is generally creative. Phone books are literally just facts. AI models are literally just facts.
margalabargala 7 days ago | root | parent | next |
> AI models are literally just facts.
Are they, or are they collections of probabilities? If they are probabilities, and those probabilities change from model to model, that seems like they might be copyrightable.
If Google, OpenAI, Facebook, and Anthropic each train a model from scratch on an identical training corpus, they would wind up with four different models that had four differing sets of weights, because they digest and process the same input corpus differently.
That indicates to me that they are not a collection of facts.
ronsor 7 days ago | root | parent |
The AI training algorithms are deterministic given the same dataset, same model architecture, and same set of hyperparameters. The main reasons the models would not be identical are differing random seeds and precision issues. The differences would not be due to any creative decisions.
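A toy run illustrates the determinism point — a minimal sketch assuming plain full-batch gradient descent, nothing like a real training pipeline:

```python
import numpy as np

def train_tiny_model(seed, steps=10, lr=0.1):
    """Full-batch gradient descent on a fixed least-squares problem."""
    data_rng = np.random.default_rng(0)             # dataset is always the same
    X = data_rng.normal(size=(64, 8))
    y = X @ data_rng.normal(size=8)
    w = np.random.default_rng(seed).normal(size=8)  # only the init varies with seed
    for _ in range(steps):
        w -= lr * X.T @ (X @ w - y) / len(X)
    return w

print(np.allclose(train_tiny_model(seed=1), train_tiny_model(seed=1)))  # True
print(np.allclose(train_tiny_model(seed=1), train_tiny_model(seed=2)))  # False
```

Same seed, same data: bit-identical weights. A different seed alone changes the result, with no creative input involved.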
margalabargala 6 days ago | root | parent |
Sure, but they don't all use the same algorithm, the same hyperparameters, etc.
At some point, with sufficiently many hyperparameters being chosen, that starts becoming a creative decision. If 5 parameters are available and all are left at the default, then no, that's not creative. If there are ten thousand, and all are individually tweaked to yield what the user wants, is that creative?
Not to mention all of these companies write their own algorithms to do the training which can introduce other small differences.
roywiggins 7 days ago | root | parent | prev |
What if I train an AI model on exactly one copyrighted work and all it does is spit that work back out?
E.g. if I upload Marvels_Avengers.mkv.onnx and it reliably reproduces the original (after all, it's just a fact that the first byte of the original file is 0xF0, etc.)
bdowling 7 days ago | root | parent | next |
A work that is “substantially similar” to a copyrighted work infringes that work under US law, no matter how it was produced. (Note: some exceptions apply, and you have to read a lot of cases to get an idea of what courts find “substantially similar”.)
HWR_14 7 days ago | root | parent |
> no matter how it was produced
IIRC, this is wrong. Independent creation is a valid (but almost impossible to prove) defense in US copyright law.
This example is not an independent creation, but your reasoning seems wrong.
bdowling 5 days ago | root | parent |
I wrote "some exceptions apply" to try to avoid getting into the weeds, but yes, independent creation is an exception. Other exceptions include out-of-term works, public domain, scènes à faire (e.g., stock characters), fair use (a huge can of worms), etc.
ronsor 7 days ago | root | parent | prev |
If the sole purpose of your model is to copy a work, then that's copyright infringement.
PeterStuer 6 days ago | root | parent | next |
If the sole purpose of your model is to copy a work, then there would be far easier, cheaper and more reliable techniques to achieve that.
Judge the output, not the system.
roywiggins 7 days ago | root | parent | prev |
Oh, in this case, the model can either reproduce the work exactly, or it can play tic-tac-toe depending on how you prompt it.
ronsor 7 days ago | root | parent |
We can change "sole purpose" to "primary purpose", and I'd argue something that happens 50% of the time counts as a primary purpose.
PittleyDunkin 7 days ago | root | parent | prev |
Who gives a damn about copyright when this is clearly profiting off of someone else's work without compensation? Sometimes the law is inadequate and that's ok—the law just needs to change.
dplavery92 7 days ago | root | parent | prev | next |
The title of Tencent's paper [0] as well as their homepage for the model [1] each use the term "Open-Source" in the title, so I think they are making the claim.
[0] https://arxiv.org/pdf/2411.02265 [1] https://llm.hunyuan.tencent.com/
vanguardanon 7 days ago | root | parent | prev | next |
What is the reason for restrictions in the EU? Is it due to some EU regulations?
ronsor 7 days ago | root | parent | next |
Most likely yes. I don't think companies can be blamed for not wanting to subject themselves to EU regulations or uncertainty.
Edit: Also, if you don't want to follow or deal with EU law, you don't do business in the EU. People here regularly say if you do business in a country, you have to follow its laws. The opposite also applies.
troupo 7 days ago | root | parent |
[flagged]
ronsor 7 days ago | root | parent | next |
I will address both points:
1. No one is training on users' bank details, but if you're training on the whole Internet, it's hard to be sure if you've filtered out all PII, or even who is in there.
2. This isn't happening because no one has time for more time-wasting lawsuits.
troupo 7 days ago | root | parent |
> No one is training on users' bank details, but if you're training on the whole Internet
Tencent has access to more than just bank accounts.
In the West there's Meta that this year opted everyone in their platform into training their AI.
> This isn't happening because no one has time for more time-wasting lawsuits.
No, this isn't happening because a) their models are, without fail, trained on material they shouldn't have willy-nilly access to, and b) because they want to pretend to be open source without being open source
bilbo0s 7 days ago | root | parent | prev |
??
Doesn't that mean if they used data created by, (or even the data of), anyone in the EU, that they would want to not release that model in the EU?
This sounds like "if an EU citizen created, or has data referenced, in any piece of the data you trained from then..."
Which, I mean, I can kind of see why US and Chinese companies prefer to just not release their models in the EU. How could a company ever make a guarantee satisfying those requirements? It would take a massive filtering effort.
em500 7 days ago | root | parent | next |
This seems to mirror the situation where US financial regulations (FATCA) are seen as such a hassle to deal with for foreign financial institutions that they'd prefer to just not accept US citizens as customers.
troupo 7 days ago | root | parent | prev | next |
> that they would want to not release that model in the EU
They don't release that model in the EU, that's correct
> This sounds like "if an EU citizen created, or has data referenced, in any piece of the data you trained from then..."
Yes, and that should be the default for any citizen of any country in the world.
Instead you have companies like Meta just opting everyone in to their AI training dataset.
> I can kind of see why US and Chinese companies prefer to just not release their models in the EU.
Companies having unfettered unrestricted access to any and all data they want is not such a good thing as you make it out to be
warkdarrior 7 days ago | root | parent |
> > This sounds like "if an EU citizen created, or has data referenced, in any piece of the data you trained from then..."
> Yes, and that should be the default for any citizen of any country in the world.
This is a completely untenable policy. Each and every piece of data in the world can be traced to one or more citizens of some country. Actively getting permission for every item is not feasible for any company, no matter the scale of the company.
andyferris 7 days ago | root | parent | next |
I think that’s kinda the point that is being made.
Technology-wise, it is clearly feasible to aggregate the data to train an LLM and to release a product on that.
It seems that some would argue that was never legally a feasible thing to do, based on the training data being impossible to use legally. So, it is the existence of many of these LLMs that is (legally) untenable.
Whether valid or not, the point may be moot because, like Uber, if the laws actually do forbid this use, they will change as necessary to accommodate the new technology. Too many “average voters” like using things such as ChatGPT, and it’s not a hill politicians will be willing to die on.
troupo 7 days ago | root | parent | prev |
> Actively getting permission for every item is not feasible for any company, no matter the scale of the company.
There's a huge amount of data that:
- isn't personal data
- isn't copyrighted
- isn't otherwise protected
You could argue about whether that is enough data, but neither you nor the corporations argue that. You just go for "every single scrap of data on the planet must be made accessible to supranational trillion-dollar corporations, without limits, now and forever"
7 days ago | root | parent | prev |
blueblimp 7 days ago | root | parent | prev | next |
In Meta's case, the problem is that they had been given the go-ahead by the EU to train on certain data, and then after starting training, the EU changed its mind and told them to stop.
GaggiX 7 days ago | root | parent | prev |
They probably trained on data protected by privacy laws, similar to Meta.
karaterobot 7 days ago | root | parent | prev | next |
Hmm, in fairness I don't see where Tencent is claiming this is open source (at least in this repo; I haven't checked elsewhere). The title of the HN post does make the claim, and that may be controversial or simply incorrect.
swyx 7 days ago | root | parent |
readme: https://github.com/Tencent/Tencent-Hunyuan-Large
> "By open-sourcing the Hunyuan-Large model"
karaterobot 7 days ago | root | parent |
Yeah, I was incorrect above. I just didn't search for the hyphen.
kaliqt 7 days ago | root | parent | prev | next |
I agree; however, Meta is guilty of this as well.
PittleyDunkin 7 days ago | root | parent | prev | next |
[flagged]
foooorsyth 7 days ago | root | parent | prev | next |
[flagged]
mrob 7 days ago | root | parent |
The term "open source" had no significant use to refer to software before the Open Source Initiative started promoting it. Previously, it was only intelligence industry jargon, meaning "publicly available information", which includes software that fails your "can read the source code" test. "Source" was used in the journalistic sense, not as in "source code". The correct term for software that passes your test but does not meet the Open Source Definition is "source available".
kube-system 7 days ago | root | parent | next |
The OSI made a huge mistake in choosing a non-trademarkable borrowed term as their own trade industry term. The original (and quite long-standing) use to refer to publicly available texts is still widely used, and English isn't a prescriptive language outside of legal frameworks like trademark. This is why you really should pick a trademarkable name when you try to define a trade term.
HDThoreaun 7 days ago | root | parent | prev |
open source means the source code is openly available. That is it. Phrases that have intuitive meaning need to stop being co-opted.
mrob 7 days ago | root | parent |
If that meaning is "intuitive", why was it not used before the Open Source Initiative introduced their definition? The competing uses are the ones co-opting an existing phrase.
foooorsyth 7 days ago | root | parent |
It’s perfectly intuitive to anyone with a brain. Never heard of OSI but they seem just about as pedantic, neurotic, and annoying with language as FSF.
Open source = I can view the source code. That’s what it means, that's what it has always meant, and that's what it will always mean. Simple as.
DataDaemon 7 days ago | root | parent | prev |
Who cares about EU? They are destroying themselves.
Mistletoe 7 days ago | root | parent | next |
Ironically their policies are why I want to move there with my American dollars. I want to live somewhere that cares about my rights, not the rights of corporations.
CamperBob2 7 days ago | root | parent |
That's fine, but don't complain when you lose access to products and services that are widely available elsewhere.
In particular, restrictions on ML models will leave you without access to extremely powerful resources that are available to people in other countries, and to people in your own country who don't mind operating outside the law. Copyright maximalism is not, in fact, a good thing, and neither is overbearing nanny-statism. Both will ultimately disempower you.
bluefirebrand 7 days ago | root | parent | next |
You have to realize that as an individual, you have no power anyways
It doesn't matter if an individual personally has access to ML models, because government and/or huge corporations will ensure that individuals cannot use them for anything that would threaten government or corporate interests
This unfettered explosion of ML growth is disempowering all of us. Those with power are not using these tools to augment us, they are hoping to replace us.
CamperBob2 7 days ago | root | parent |
This unfettered explosion of ML growth is disempowering all of us.
Never mind that I've gotten things done with ChatGPT that would otherwise have taken much longer, or not gotten done at all. If this is what "disempowerment" feels like, bring it on.
Although the tech is nowhere near ready to make it happen, I would be very happy to be "replaced" by AI. I have better things to do than a robot's job. You probably do, too.
Mistletoe 7 days ago | root | parent | prev |
Can you name some of these extremely powerful resources? I’m fine without access to AI hallucinations and poorly made images with six fingers.
CamperBob2 6 days ago | root | parent | next |
(Shrug) Among other capabilities, the ability to turn English into working code is a big deal. Perhaps you disagree, but if you do, it signals the presence of a gulf too large to cross in an HN thread.
Say what you want about ML models, they will get better at a rate that outpaces any possible self-improvement on your part. (Maybe you've noticed that those jokes about six-fingered people aren't aging particularly well.) The same is true for me, and I don't want to be left behind as that happens. At the national scope, countries that act to restrict or impede progress in this area will be outcompeted dramatically in the long run.
6 days ago | root | parent | prev |
the5avage 7 days ago | root | parent | prev |
Where would you go when you would live there (as a programmer interested in ai)? Just asking for a friend.
7 days ago | root | parent |
a_wild_dandan 7 days ago | prev | next |
The model meets/beats Llama despite having nearly an order of magnitude fewer active parameters (52B vs 405B). Absolutely bonkers. AI is moving so fast with these breakthroughs -- synthetic data, distillation, alt. architectures (e.g. MoE/SSM), LoRA, RAG, curriculum learning, etc.
We've come so astonishingly far in like two years. I have no idea what AI will do in another year, and it's thrilling.
csomar 7 days ago | root | parent | next |
It is insane because 52B can run on my three-year-old laptop. The 3B Llama 3.2 from Facebook can already autocomplete for me. I didn't try this model, but if the scores are to be believed, this can give useful and actionable insights into a project's source code. Probably not as good as Claude 3.5, but I can run it locally. This is a game changer.
z3ncyberpunk 5 days ago | root | parent | prev |
Moving fast or just completely inefficient
1R053 7 days ago | prev | next |
the paper with details: https://arxiv.org/pdf/2411.02265
They use
- 16 experts, of which one is activated per token
- 1 shared expert that is always active
In summary, that makes around 52B active parameters per token instead of the 405B of LLama3.1.
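Back-of-envelope, those reported sizes imply a rough split between experts and the shared/base parameters (my inference, not figures from the paper):

```python
# Reported: 389B total, 52B active; 16 routed experts + 1 shared, 1 routed per token.
total, active = 389e9, 52e9
n_experts, n_shared, n_routed = 16, 1, 1

# total  = base + (n_experts + n_shared) * expert_size
# active = base + (n_shared  + n_routed) * expert_size
expert_size = (total - active) / (n_experts - n_routed)   # 15 experts' difference
base = active - (n_shared + n_routed) * expert_size
print(f"per-expert ≈ {expert_size / 1e9:.1f}B, non-expert base ≈ {base / 1e9:.1f}B")
# → per-expert ≈ 22.5B, non-expert base ≈ 7.1B
```

This ignores how attention/embedding parameters are actually laid out, so treat the split as illustrative only.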
the_duke 7 days ago | prev | next |
> Territory” shall mean the worldwide territory, excluding the territory of the European Union.
Anyone have some background on this?
jmole 7 days ago | root | parent | next |
I believe the EU has (or is drafting) laws about LLMs of a certain size which this release would not comply with.
mattlutze 7 days ago | root | parent | next |
https://artificialintelligenceact.eu/high-level-summary/
There's many places where the model might be used which could count as high-risk scenarios and require lots of controls. Also, we have:
GPAI models present systemic risks when the cumulative amount of compute used for its training is greater than 10^25 floating point operations (FLOPs). Providers must notify the Commission if their model meets this criterion within 2 weeks. The provider may present arguments that, despite meeting the criteria, their model does not present systemic risks. The Commission may decide on its own, or via a qualified alert from the scientific panel of independent experts, that a model has high impact capabilities, rendering it systemic.
In addition to the four obligations above, providers of GPAI models with systemic risk must also:
- Perform model evaluations, including conducting and documenting adversarial testing to identify and mitigate systemic risk.
- Assess and mitigate possible systemic risks, including their sources.
- Track, document and report serious incidents and possible corrective measures to the AI Office and relevant national competent authorities without undue delay.
- Ensure an adequate level of cybersecurity protection."
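For scale, a back-of-envelope check against that compute trigger using the common 6·N·D approximation — the token count here is an assumption, not a disclosed figure I've verified:

```python
# Chinchilla-style rule of thumb: training compute ≈ 6 · N(params) · D(tokens).
# N uses active (per-token) parameters; D is assumed for illustration.
active_params = 52e9
tokens = 7e12                  # assumed pretraining corpus size
threshold = 1e25               # AI Act systemic-risk trigger
flops = 6 * active_params * tokens
print(f"≈{flops:.1e} FLOPs, {'above' if flops > threshold else 'below'} the 1e25 threshold")
# → ≈2.2e+24 FLOPs, below the 1e25 threshold
```

Under these assumptions a model like this would land under the trigger, but the margin is small enough that the obligations are plausibly a concern for future training runs.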
They may not want to meet these requirements.
csomar 7 days ago | root | parent | next |
Good on the Chinese for ignoring the insanity in the EU and just releasing this, for us, the public, with no strings attached.
lcnPylGDnU4H9OF 7 days ago | root | parent | prev |
> 10^25 floating point operations (FLOPs)
Is there a reason this number was chosen?
troupo 7 days ago | root | parent | prev |
Also existing privacy laws (GDPR) and AI Act (foundational models have to disclose and document their training data)
GaggiX 7 days ago | root | parent | prev |
I imagine they trained on data that is protected by privacy laws, similar to Meta.
helloericsf 7 days ago | prev | next |
- 389 billion parameters and 52 billion activation parameters, capable of handling up to 256K tokens.
- Outperforms LLama3.1-70B and exhibits comparable performance to the significantly larger LLama3.1-405B model.
Etheryte 7 days ago | root | parent |
It's a bit funny to call the 405B reference "significantly larger" than their 389B, while highlighting the fact that their 389B outperforms the 70B.
rose_ann_ 7 days ago | root | parent | next |
MoE model with 52 billion activated parameters means its more comparable to a (dense) 70b model and not a dense 405b model
phkahler 7 days ago | root | parent | next |
>> MoE model with 52 billion activated parameters means its more comparable to a (dense) 70b model and not a dense 405b model
Only when talking about how fast it can produce output. From a capability point of view it makes sense to compare the larger number of parameters. I suppose there's also a "total storage" comparison too, since didn't they say this is 8bit model weights, where llama is 16bit?
HPsquared 7 days ago | root | parent | prev |
Does this mean it runs faster or better on multiple GPUs?
chessgecko 7 days ago | root | parent |
For decode steps it depends on the number of inputs you run at a time. If your batch size is 1 then it runs in line with active params, then as you get to like batch size 8 it runs in line with all params, then as you increase to 128ish it runs like the active params again.
For the context encode it’s always close to as fast as a model with a similar number of active params.
For running on your own the issue is going to be fitting all the params on your gpu. If you’re loading off disk anyways this will be faster but if this forces you to put stuff on disk it will be much slower.
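Roughly, assuming each token routes uniformly and independently across experts (a simplification real routers don't satisfy, so treat this as an illustration):

```python
# Expected number of distinct routed experts touched per layer for a decode batch.
def expected_experts_touched(batch, n_experts=16):
    return n_experts * (1 - (1 - 1 / n_experts) ** batch)

for b in (1, 8, 128):
    touched = expected_experts_touched(b)
    print(f"batch {b:3d}: ~{touched:4.1f}/16 experts loaded, "
          f"{touched / b:.2f} expert-loads amortized per token")
```

At batch 1 you load only the active experts; by batch ~8 most experts are touched, so weight traffic looks like the full model; by batch ~128 every expert's load is amortized over many tokens, so the per-token cost again tracks the active parameter count.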
klipt 7 days ago | root | parent | prev |
It's a whole 4% smaller!
eptcyka 7 days ago | prev | next |
Definitely not trained on Nvidia or AMD GPUs.
acchow 7 days ago | root | parent | next |
How do you know this?
Apparently 20% of Nvidia's quarterly revenue is booked in Singapore where shell companies divert product to China: https://news.ycombinator.com/item?id=42048065
smnrg 7 days ago | root | parent | next |
Sarcasm is a valid theory.
azinman2 7 days ago | root | parent | prev |
I assume it was missing /s
rb2k_ 7 days ago | root | parent | prev |
The readme mentions H20 GPUs, Nvidia's "China-compatible" card (41% fewer cores and 28% lower performance versus the top Hopper H100 configuration).
1R053 7 days ago | root | parent |
you can get a long way on something with 41% less performance than your favorite supercar...
Tepix 7 days ago | prev | next |
I'm no expert on these MoE models with "a total of 389 billion parameters and 52 billion active parameters". Do hobbyists stand a chance of running this model (quantized) at home? For example on something like a PC with 128GB (or 512GB) RAM and one or two RTX 3090 24GB VRAM GPUs?
bick_nyers 7 days ago | root | parent | next |
You would need to fit the 389B parameters in VRAM to have a speed that is usable. Different experts are activated on a per token basis, so you would need to load/unload a large chunk of the 52B active parameters every token if you were trying to offload parameters to system RAM or SSD. PCIE 4.0 x16 speed is 64GB/s, so you can load those active parameters maybe 1 or 2 times per second, yielding an output speed of 1-2 tokens per second, which most would consider "unusable".
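That estimate as explicit arithmetic (FP8 weights and worst-case expert churn every token assumed):

```python
# Decode-speed ceiling when active experts must be streamed over PCIe each token.
active_params = 52e9           # parameters re-loaded per token, worst case
bytes_per_param = 1            # FP8 weights
pcie_bytes_per_sec = 64e9      # PCIe 4.0 x16

tokens_per_sec = pcie_bytes_per_sec / (active_params * bytes_per_param)
print(f"≈{tokens_per_sec:.1f} tokens/sec ceiling from the interconnect alone")
# → ≈1.2 tokens/sec ceiling from the interconnect alone
```

In practice some experts repeat across consecutive tokens, so real throughput would land somewhere between this floor and the VRAM-resident case.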
o11c 7 days ago | root | parent |
Does that have to be same-node VRAM? Or can you fit 52B each on several nodes, and only copy the transient state around?
bick_nyers 7 days ago | root | parent |
Generally speaking this works well, pending your definition of node and the interconnect between them. If by node you mean GPU, and you have multiple of them on the same system (interconnect is PCIE, doesn't need to be full speed however for inference), you're good. If you mean multiple computers connected by 1 Gigabit Ethernet? More challenging.
When splitting models layer by layer, users in r/LocalLLaMA have reported good results with as low as PCIE 3.0 x4 as the interconnect (4GB/s). For tensor parallelism, the interconnect requirements are higher but the upside can be faster speeds in accordance to number of GPUs split across (whereas layer by layer operated like a pipeline, so isn't necessarily faster than what a single GPU can provide, even if splitting across 8 GPUs).
1R053 7 days ago | root | parent |
An H100 has 80 GB of memory, so at FP8 that would allow 3 of the 16+1 experts per GPU (assuming around 26B parameters per expert), requiring 9 H100s, which usually would not fit in one chassis I guess.
Once you have something with 192 GB it gets interesting. You could probably have 7 at FP8 per GPU. At FP16 it probably only would fit 3 per card, requiring 9 again.
I'd say for the current memory layout of cards they missed a little bit the sweet spot. With slightly smaller models or one expert less one should be able to run it on 8 H100s at FP8 or 2 B100s at FP8 or even on 4 B100s at FP16 if I calculated correctly.
bick_nyers 7 days ago | root | parent |
You could always split one of the experts up across multiple GPUs. I tend to agree with your sentiment, I think researchers in this space tend to not optimize that well for inference deployment scenarios. To be fair, there is a lot of different ways to deploy something, and a lot of quantization techniques and parameters.
DrPhish 7 days ago | root | parent | prev | next |
Yes, it can be done. I'm running a 24-channel DDR5 dual-EPYC rig and get good speed on large MoE models. I only use the GPU for context processing.
They're actually a best-case for CPU inference vs dense models. I usually run deepseek 2.5 quanted to q8, but if this model works well I'll probably switch to it once support hits llama.cpp.
Tepix 5 days ago | root | parent |
Interesting, what RAM do you use exactly? 24x 16GB DDR5-6000 DIMMs? It seems that those boards only support up to DDR5-4800: https://geizhals.de/?cat=mbsp3&xf=4921_2%7E493_24x+DDR5+DIMM...
Does the core count matter or can you get away with the smallest 2x EPYC 9015 configuration? What are "good speeds"?
DrPhish 3 days ago | root | parent | next |
I use 24 sticks of ddr5-4800, which gets me up to 9t/s on deepseek 2.5 at q8. 48 threads was optimal in llama.cpp. I would like to move to epyc 9005 chips and ddr5-6000, but it is cost prohibitive with CPUs still over $10k each on eBay.
I followed the guide at https://rentry.co/miqumaxx/
Tepix 20 hours ago | root | parent |
How many cores do your CPUs have? Are you using the 64 core EPYC 9334 mentioned in the linked page? Do that many cores provide a speedup versus having fewer cores?
Tepix 5 days ago | root | parent | prev |
Looks like we will soon get boards supporting 24x DDR5-6000 for the EPYC 9005 CPUs.
lanceflt 7 days ago | root | parent | prev |
RAM for 4-bit is 1GB per 2 billion parameters. So you will want 256GB RAM and at least one GPU. If you only have one server and one user, it's the full parameter count. (If you have multiple GPUs/servers and many users in parallel, you can shard and route it so you only need the active parameter count per GPU/server. So it's cheaper at scale.)
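That rule of thumb ("1 GB per 2 billion parameters" at 4-bit), applied to this model's reported sizes:

```python
# Weight footprint at a given quantization width.
def weight_gb(params, bits=4):
    return params * bits / 8 / 1e9   # bytes -> GB

print(f"full model : {weight_gb(389e9):.1f} GB")   # all experts resident (single server)
print(f"active only: {weight_gb(52e9):.1f} GB")    # per-shard when routed at scale
# → full model : 194.5 GB
# → active only: 26.0 GB
```

Hence the ~256 GB RAM figure for a single box once you add KV cache and overhead.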
zamadatix 7 days ago | root | parent |
Do the inactive parameters need to be loaded into RAM to run an MoE model decently enough?
iqandjoke 7 days ago | prev | next |
How does it compare with LLama3.2?
Tepix 20 hours ago | root | parent |
Llama 3.2 has the same performance for text as Llama 3.1 and the largest model hasn't been released.
7 days ago | prev | next |
trump2026 7 days ago | prev | next |
[flagged]
helloericsf 7 days ago | root | parent |
Call Jensen and Lisa! lol
trump2026 7 days ago | root | parent |
[dead]
2OEH8eoCRo0 7 days ago | prev | next |
[flagged]
azinman2 7 days ago | root | parent | next |
I just did, and it tells me it has no information on that issue. It also responded back in Chinese to that English query, which either suggests to me that the censorship instruction tuning is heavily weighted towards Chinese, or the model has a hard time staying in English (which I believe has been the case for other Chinese LLMs in the past)
the5avage 7 days ago | root | parent | next |
I once triggered the ChatGPT censorship (by trying to manipulate an image of my face) and it also responded in english to a german query.
dyauspitr 7 days ago | root | parent | prev |
Try testing it on some of the US’ taboo topics like LGBT, feminism, racism etc.
dyauspitr 7 days ago | root | parent |
Interesting, it seems to have the “correct” answers to all those issues. I wonder if this is mostly just a US based model as a base.
azinman2 7 days ago | root | parent |
Go ask it about Chinese issues, war on Ukraine, etc. Whatever it’s based on, it is heavily “safety tuned”
e____g 7 days ago | root | parent | prev | next |
> Is Xi Winnie-the-Pooh?
< 很抱歉,我还未学习到如何回答这个问题的内容,暂时无法提供相关信息
(Google Translate: "I'm sorry, I haven't learned how to answer this question yet and cannot provide relevant information for the time being.")
humptybumpty 7 days ago | root | parent | prev |
Q: What’s the “tank man”? A: (in Chinese)
I'm sorry, but I haven't learnt enough about how to answer this question to be able to provide information about it at this time.
geenkeuse 7 days ago | prev |
[flagged]
mrob 7 days ago | next |
Not open source. Even if we accept model weights as source code, which is highly dubious, this clearly violates clauses 5 and 6 of the Open Source Definition. It discriminates between users (clause 5) by refusing to grant any rights to users in the European Union, and it discriminates between uses (clause 6) by requiring agreement to an Acceptable Use Policy.
EDIT: The HN title was changed, which previously made the claim. But as HN user swyx pointed out, Tencent is also claiming this is open source, e.g.: "The currently unveiled Hunyuan-Large (Hunyuan-MoE-A52B) model is the largest open-source Transformer-based MoE model in the industry".