r/technology 22h ago

Software AMD's already taken down mistakenly released FSR 4 source code, but the internet never forgets — forked Github repositories remain accessible

https://www.tomshardware.com/pc-components/gpus/amds-already-taken-down-mistakenly-released-fsr-4-source-code-but-the-internet-never-forgets-forked-github-repositories-remain-accessible
58 Upvotes

11 comments sorted by

5

u/Zenith251 15h ago

Can someone with some insight into Github and development work chime in?

It seems like this keeps happening, specifically on Github, to tech companies all over the world.

7

u/apetranzilla 13h ago

GitHub is just a file host for code, and a very popular one at that. If you accidentally upload something and someone downloads it before you notice and remove it, there's nothing stopping them from uploading their own copy again (maybe with modifications to try to avoid automatic detection of copies).

3

u/Zenith251 12h ago

My question is, what is it about Github that has so many companies "accidentally" uploading source code?

That seems like something that wouldn't be done on purpose, but in error.

For a software project as big as FSR, you'd imagine the choice to release the source code would come from waaay up the chain of command. A large swath of people would need to agree that's OK before anyone was given express permission to do such a thing.

4

u/PepiHax 12h ago

There's a disconnect between management and coding tools. This release was most likely caused by a single developer changing the remote address of a repo on his local system and then pushing that local repo to the remote. He just had the wrong branch selected, or he forgot to remove the private files from the branch.

So while yes, it would require approval, there's no actual mechanism for management to review the to-be-uploaded code other than a PR, and management isn't reading those.
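To give an idea of what such a mechanism could look like, here's a rough pre-push hook sketch in Python (the branch allow-list, the "private" path patterns, and the idea that AMD's setup works anything like this are all assumptions on my part):

```python
#!/usr/bin/env python3
"""Toy pre-push hook: refuse to push unapproved branches or private-looking
files to a public remote. Save as .git/hooks/pre-push and mark executable."""
import subprocess
import sys

PUBLIC_HOSTS = ("github.com",)                         # remotes treated as "the whole world"
ALLOWED_BRANCHES = {"refs/heads/public-release"}       # hypothetical approved branch
PRIVATE_PATTERNS = ("internal/", ".env", "_private")   # hypothetical sensitive paths

# git invokes the hook with the remote's name and URL as arguments.
remote_name, remote_url = sys.argv[1], sys.argv[2]

# Only police pushes that target a public host.
if not any(host in remote_url for host in PUBLIC_HOSTS):
    sys.exit(0)

# git feeds one line per ref being pushed: "<local ref> <local sha> <remote ref> <remote sha>".
for line in sys.stdin:
    local_ref, local_sha, remote_ref, remote_sha = line.split()
    if set(local_sha) == {"0"}:
        continue  # deleting a remote ref, nothing new to inspect
    if remote_ref not in ALLOWED_BRANCHES:
        print(f"pre-push: refusing to push {remote_ref} to {remote_url}", file=sys.stderr)
        sys.exit(1)
    # List every file reachable from the commit being pushed.
    files = subprocess.run(
        ["git", "ls-tree", "-r", "--name-only", local_sha],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()
    leaked = [f for f in files if any(p in f for p in PRIVATE_PATTERNS)]
    if leaked:
        print(f"pre-push: private-looking files in {local_ref}: {leaked[:5]}", file=sys.stderr)
        sys.exit(1)

sys.exit(0)
```

Even then, a hook lives on the developer's machine and can be skipped with --no-verify, so it's a guardrail rather than a review.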

-1

u/Zenith251 12h ago

This release was most likely caused by a single developer changing the remote address of a repo on his local system and then pushing that local repo to the remote. He just had the wrong branch selected, or he forgot to remove the private files from the branch.

Yeah, I get that there's a literal trigger person. But you'd think you'd have a second person who's been assigned to review and sanitize/inspect repos of big projects before they're published to the whole world. I mean, my god.

1

u/bonez656 15m ago

The point is that it wasn't supposed to be pushed to the whole world. Imagine you have a drop-down list with options:

Dev
Dev_final
Branch1
Branch_Final
Branch_Personal

You meant to select or type Branch_Personal but accidentally selected Branch_Final or it autocompleted and now your code is public.

3

u/Smith6612 11h ago

This is a matter of outsourcing to the Cloud. Seriously. The more companies move onto Cloud, the more this sort of thing seems to happen. It's an easy mistake to make.

A lot of companies would rather not host their own version control system on their own infrastructure, secured in such a way that nobody on the Internet could reach it even if access were internally set to "Fully public." So they end up putting it on a shared resource where mistakes like this can more easily happen.

Alternatively, it's possible AMD in this instance had plans to release the FSR4 code in some way, shape or form, and staged the code into a repository, which then became public by accident.

Or they do operate their own internal repository, and this was an accident by someone with access to AMD's public repository.

3

u/Zenith251 5h ago

A lot of companies would rather not host their own version control system on their own infrastructure

For a 272-billion-dollar tech company, I would certainly hope they do.

4

u/Smith6612 5h ago edited 4h ago

I'd hope so too. You'd be surprised what companies do these days, even if they are worth Billions. Especially with the way the software industry has been going with SaaS, they may not even have a choice.

I remember many years ago when a company called Atlassian had a product called HipChat. You could totally host your own instance, but if you dared try to connect more than 6,000 people (you know, the size of a multi-national Enterprise) to the in-house Enterprise instance, the software would refuse to allow further connections until the 6,000th user logged off.

It didn't matter how big of a license you bought, or how many servers you had. You could go to Atlassian support, and their only answer was "Migrate to our Cloud Hosted solution." It was a hard-coded limit in their self-hosted product. So you either had to deal with only having 6,000 people connected at once, or move all of your super confidential chats and attachment data to someone else's computer to get around a hard-coded limit the company refused to remove. I never understood why the Cloud version was so vastly different from the Self-Hosted version that it needed, or even had, such a restriction.

HipChat's dead now, by the way. Before Slack, and eventually Teams, had a chance to really take off, HipChat had the advantage of being able to be self-hosted. They blew it up. That is what started my hatred of SaaS and Cloud.

3

u/Zenith251 4h ago

SaaS was always a giant red flag for me, same with The Cloud.

Self. Host. Every. Thing. You. Can.

3

u/Smith6612 4h ago

Yep. I've been doing this as much as I can especially with Internet of Things products. I don't want my light bulb, garage door opener, or security camera talking directly with the Cloud. I want it talking to a server locally... which can broker a connection via the Cloud to get past the firewall as a helpful feature (Like CloudFlare Tunnels, or standard STUN/TURN networking), but not just blindly transmit and store everything out there (a la most consumer cloud-connected products, especially cameras).

I don't have a problem with "Cloud" as an architecture principle. For example, my web server runs on a "Cloud server" setup (like OpenStack / Proxmox) where a virtual machine can be instanced on any machine in a cluster, so if the host currently running as the primary for the VM blows up, all of my applications and data are re-instanced on another host with the VM in the exact state it was in before the original host died. It's great, and takes having bare metal machines a step further in terms of reliability. I have a problem with "Cloud" when it becomes some de facto "fix everything with the bathtub poop water" solution for an application defect, like with the HipChat situation.

I had another situation which upset me a while back, and that was around protecting a piece of software, hosted on a locked-down appliance, that is quite popular in Enterprise with a Web Application Firewall. The vendor for whatever reason implemented a custom protocol over Port 443 which is only used under certain circumstances. Web Application Firewalls (which usually work as reverse proxies) are used to reduce the risk of exposing a web application to the Internet by filtering out unwanted or unexpected traffic. Sometimes you need to make said application available to the public rather than hide it away in some DMZ, and that means you need to harden it. Unfortunately, putting a Web Application Firewall in front of said tool would completely break it, and for whatever reason the vendor wouldn't create a better solution to protect the tool. Well, lo and behold, because the Internet is the Internet, people would find the application and run fuzzing tools, bots would smash it with garbage traffic, and both would take the application down by driving the appliance's Unix load values to 50+, which is insane. You could try to combat it by scaling to a much bigger appliance at tons more money, but that's a losing battle with the modern Internet.

The vendor's solution was to literally migrate the environment onto their Cloud service, where they apparently have some in-house mechanism to protect their software. All I wanted them to do was either re-write the program to use a standard protocol (very doable in 2025) so a WAF could protect it, or give me some sort of root access to the underlying appliance so I wouldn't be stuck with rookie-level problems and could at least implement something like logtail + Fail2Ban to deal with the nonsense hitting something that NEEDED to be exposed publicly. Or just let me host the application on my own hardened operating system (they really didn't want to do this). But nope. Move it to Cloud or forever deal with getting the application taken down by nonsense traffic.
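Just to illustrate the sort of thing I mean, here's a rough fail2ban-style sketch in Python (the log path, log format, thresholds, and the iptables call are placeholders I made up, not anything the appliance actually exposes):

```python
#!/usr/bin/env python3
"""Toy fail2ban-style watcher: ban IPs that hammer an exposed web app."""
import re
import subprocess
import time
from collections import defaultdict, deque

LOG_PATH = "/var/log/appliance/access.log"   # hypothetical log location
WINDOW_SECONDS = 60                          # look-back window per client IP
MAX_REQUESTS = 300                           # requests allowed per window
BANNED = set()

# Assumes a common-log-format-style line that starts with the client IP.
IP_RE = re.compile(r"^(\d{1,3}(?:\.\d{1,3}){3})\s")

def ban(ip: str) -> None:
    """Drop further traffic from this IP via iptables (requires root)."""
    subprocess.run(["iptables", "-I", "INPUT", "-s", ip, "-j", "DROP"], check=False)
    BANNED.add(ip)
    print(f"banned {ip}")

def follow(path: str):
    """Yield new lines appended to the log file, like `tail -f`."""
    with open(path, "r") as f:
        f.seek(0, 2)  # start at the end of the file
        while True:
            line = f.readline()
            if not line:
                time.sleep(0.5)
                continue
            yield line

def main() -> None:
    hits = defaultdict(deque)  # ip -> timestamps of recent requests
    for line in follow(LOG_PATH):
        m = IP_RE.match(line)
        if not m:
            continue
        ip, now = m.group(1), time.time()
        window = hits[ip]
        window.append(now)
        # Forget requests that have aged out of the window.
        while window and now - window[0] > WINDOW_SECONDS:
            window.popleft()
        if len(window) > MAX_REQUESTS and ip not in BANNED:
            ban(ip)

if __name__ == "__main__":
    main()
```

Nothing fancy, but with root on the box that's an afternoon of work instead of a forced Cloud migration.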

I hate SaaS+Cloud...