By now we’re all accustomed to the capabilities of generative AI for creating images. For some tasks, like casting an existing image in a recognizable art style, it works well. Much beyond that, it runs into limitations: complex prompts often don’t return exactly what you imagined, and iterating on a failed prompt can quickly become time-consuming.
In an attempt to make image generation more reliable at scale, ComfyUI offers a visual, node-based workflow builder that ensures certain properties are present or absent in the resulting image. ComfyUI instances configured for anonymous access allow anyone on the internet to see these workflows and the images they generate. In our research, we found at least 60 ComfyUI servers exposed to the internet and leaking data. You can probably guess one thing they were being used to generate… but there are also some surprises.
What’s ComfyUI?
ComfyUI is a relatively new technology, first released at the beginning of 2023. Riding the wave of interest in generative AI, community adoption continued through 2024, earning 77k stars on GitHub and garnering support from mainstream commercial software like NVIDIA RTX Remix.
ComfyUI provides a visual, node-based editor for defining operations in a workflow. The interface consists of lists of models, nodes, and workflows the user has created. In the example workflow pictured below, a user might select different models to perform different steps in generating an image. They can also define both positive and negative prompts to ensure the output is to their liking. When this interface is exposed, anonymous users can see the models, nodes, saved prompts, and workflows, and even generate their own images with the host’s computing resources.
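To make that last point concrete: ComfyUI’s HTTP API accepts workflow graphs as JSON, so anyone who can reach the port can queue a generation job of their own. The sketch below is a minimal example under stated assumptions: the host address and checkpoint filename are placeholders, and the node graph mirrors ComfyUI’s default text-to-image template.

```python
import json
import urllib.request

# Placeholder target; exposed instances typically listen on port 8188.
HOST = "http://203.0.113.10:8188"

# Minimal text-to-image graph in ComfyUI's API format: node id -> class + inputs.
# "ckpt_name" must name a checkpoint that exists on the host (placeholder here).
workflow = {
    "4": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "model.safetensors"}},
    "6": {"class_type": "CLIPTextEncode",  # positive prompt
          "inputs": {"text": "studio photo of a t-shirt on a model",
                     "clip": ["4", 1]}},
    "7": {"class_type": "CLIPTextEncode",  # negative prompt
          "inputs": {"text": "blurry, watermark", "clip": ["4", 1]}},
    "5": {"class_type": "EmptyLatentImage",
          "inputs": {"width": 512, "height": 512, "batch_size": 1}},
    "3": {"class_type": "KSampler",
          "inputs": {"model": ["4", 0], "positive": ["6", 0],
                     "negative": ["7", 0], "latent_image": ["5", 0],
                     "seed": 42, "steps": 20, "cfg": 8.0,
                     "sampler_name": "euler", "scheduler": "normal",
                     "denoise": 1.0}},
    "8": {"class_type": "VAEDecode",
          "inputs": {"samples": ["3", 0], "vae": ["4", 2]}},
    "9": {"class_type": "SaveImage",
          "inputs": {"images": ["8", 0], "filename_prefix": "anon"}},
}

# POST /prompt queues the job; an open instance requires no credentials.
req = urllib.request.Request(
    f"{HOST}/prompt",
    data=json.dumps({"prompt": workflow}).encode(),
    headers={"Content-Type": "application/json"},
)
print(urllib.request.urlopen(req, timeout=10).read().decode())
```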
Resources for threat intelligence teams
Threat intelligence teams looking to assess their own exposure to open ComfyUI instances can search Shodan for title:"ComfyUI", which returns around 2,800 sites. The vast majority of these are dead ends: roughly 800 return login pages, which can be removed by updating the search to title:"ComfyUI" -title:"Login". Apart from the risks of data leakage, these authenticated instances may still be relevant for determining whether a vendor is using ComfyUI in some capacity.
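For teams that prefer to script this step, here is a minimal sketch using Shodan’s official Python library. It assumes a Shodan API key with search privileges; the key value is a placeholder.

```python
import shodan  # third-party: pip install shodan

api = shodan.Shodan("YOUR_API_KEY")  # placeholder; search requires a paid key

# Exclude instances fronted by a login page, per the refined query above.
results = api.search('title:"ComfyUI" -title:"Login"')
print(f"Total matches: {results['total']}")

for match in results["matches"]:
    country = match.get("location", {}).get("country_name", "unknown")
    print(f"{match['ip_str']}:{match['port']}  ({country})")
```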
Of the remaining 2,000 sites, many are honeypots, require authentication, or are otherwise inaccessible. Ultimately, we found 93 IP addresses allowing unauthenticated access to ComfyUI instances. A further 28 of them timed out while attempting to load the UI, and 5 displayed evidence of being honeypots after loading. That left us with 60 unauthenticated ComfyUI servers leaking data and/or available for abuse.
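This triage can be approximated with a short probe. ComfyUI serves a JSON endpoint, /system_stats, that an open instance answers without credentials; a login page, a timeout, or a non-JSON response signals a dead end. The sketch below uses a placeholder target list, and the response field names reflect our reading of the endpoint’s output; it also pulls the device names from the hardware metadata instances share, which we return to later.

```python
import requests  # third-party: pip install requests

candidates = ["203.0.113.10:8188"]  # placeholder list, e.g. from the Shodan search

for target in candidates:
    try:
        resp = requests.get(f"http://{target}/system_stats", timeout=10)
        stats = resp.json()  # raises ValueError if the response is not JSON
    except (requests.RequestException, ValueError):
        print(f"{target}: timeout, auth wall, or non-ComfyUI response")
        continue
    # An open instance reports its platform details and GPU inventory.
    for device in stats.get("devices", []):
        print(f"{target}: {device.get('name')} "
              f"(VRAM {device.get('vram_total', 0) / 2**30:.0f} GiB)")
```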
The distribution of IP addresses in the Shodan search shows a concentration in China, which both is and isn’t misleading. Many of the ComfyUI IP addresses in China are honeypots, but the leaking instances are disproportionately hosted in China or use Chinese in their UI settings or file names. These instances are also more likely to be operated by small AI/ML vendors rather than individuals, and thus pose some supply chain risk.
Geographic distribution of Shodan results for title:ComfyUI

Risks of exposing ComfyUI

Expanded attack surface
Exposing additional services, even with authentication enabled, increases an organization’s attack surface. In inspecting the IP addresses where ComfyUI was being hosted, we found they often contained additional services related to the AI generation pipeline. In this example, ComfyUI generated images for a clothing retailer on one port, while a login page for a DeepSeek interface somehow related to an RF chip manufacturer was exposed on another.
AI-generated images swapping t-shirts on a model. Zoomed-out view of the workflow on the right.
Login to corporate infrastructure for a seemingly different business on a different port.

Supply chain
The companies operating the ComfyUI workflows often hosted their public website and other business applications on the same IPs. In reviewing these, it was clear that these weren’t hobbyist instances, but attempts to offer AI image generation as a commercial service. The risk from exposed ComfyUI instances most commonly arises in the supply chain, as the software allows enterprising individuals to quickly set up image generation pipelines that can then be offered as SaaS services to others. Instances sharing their hardware metadata showed some impressive specs, like an NVIDIA A100-SXM4-80GB GPU with 1 TB of RAM, indicating either a very rich individual, a repurposed cryptominer, or a level of investment commensurate with a small business.
Prompt leaks
ComfyUI embeds image generation metadata in the PNGs it creates. If those prompts contain brand keywords or other internal information, they can be leaked when the images are shared. In one case, the inputs included text prompts and base64-encoded photos of real women, both of which could be recovered from the metadata of the generated PNGs. ComfyUI-generated images in the AI supply chain may therefore persist information about the images’ provenance of which corporate users might not be aware. Depending on what that prompt data is, it can lead directly to our next risk.
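No special tooling is needed to recover this metadata: ComfyUI writes the prompt graph and the full workflow into the PNG’s text chunks. Below is a minimal sketch with Pillow; the file name is a placeholder, and the length check is only a heuristic for spotting base64-embedded images like the ones described above.

```python
import base64
import json

from PIL import Image  # third-party: pip install Pillow

img = Image.open("generated.png")  # placeholder path
# ComfyUI stores its metadata as PNG text chunks named "prompt" and "workflow".
prompt_json = img.info.get("prompt")
if prompt_json:
    graph = json.loads(prompt_json)
    for node_id, node in graph.items():
        for key, value in node.get("inputs", {}).items():
            if not isinstance(value, str):
                continue
            if len(value) > 1000:  # heuristic: likely a base64-embedded image
                with open(f"embedded_{node_id}.png", "wb") as fh:
                    fh.write(base64.b64decode(value))
            elif key == "text":
                print(f"node {node_id} prompt text: {value}")
```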
Viewing a PNG in a hex editor to read the prompt metadata, including the ComfyUI workflow and the text prompt.

Reputational damage
Even if you’re not generating pornography with ComfyUI, other end users of a shared AI image generation vendor might be. If the vendor’s ComfyUI instance is exposed, you could wind up in the uncomfortable position of having your brand associated with that content. The content currently being generated on such multi-tenant ComfyUI installations ranges from the innocuous to the extremely unpleasant.
In our survey, we observed 4 IP addresses generating pornography, usually in the hentai/anime style. Four IP addresses might not seem like very many, but each of those systems was outputting a constant stream of pornography, resulting in a large volume of content. On these 4 IPs we observed dozens of different AI models dedicated to specific explicit scenarios. The existence of such models is not a secret: they can be found even on industry-standard sites like Hugging Face (example and warning: NSFW), though it can be a little disconcerting to see how much development effort has gone into creating resources for mass-producing hyper-specific pornography. Why someone would want to produce more pornography than they could possibly consume would require a more psychoanalytical approach than we take in our research, so for now we will leave it as a risk you don’t want your business associated with.
AI-generated images including an American-looking fisherman, Donald Trump in front of the White House, and scantily clad or naked anime women. These images are likely being generated by separate customers of an AI image generation service.

Communications and deepfakes
Somewhat to our surprise, none of the instances we surveyed were creating pornographic deepfakes, at least at the time we observed them. However, one instance was creating realistic videos of a female Chinese spokesperson in a military uniform. The project used separate workflows to animate her body and face based on models and reference videos, in an effort to match normal human movement to the words she was speaking. At first glance, this project appeared to be creating deepfakes of a Chinese military spokesperson, which would be ripe for abuse, but after translating two of the videos we realized they were announcements related to university events. Still, the ability of anonymous users to edit the prompts could create unintended effects for the consumers of this media.
A complex workflow using a green-screened reference video to animate an input image.
Collection of AI-generated videos based on a still image, pose skeletons for modeling movements in input videos, and one of the individually animated heads for matching mouth movements to speech.

Conclusion
In the case of data leaking from AI image generation software, we can’t ignore that the risks go beyond preventing the exposure of strictly sensitive data like PII and credentials. The content being created by genAI infrastructure configured for public viewing can erode our perception of, and trust in, that vendor. Exposed ComfyUI interfaces reveal these images, and those images can in turn reveal the prompts used to generate them. They may also reveal other leaks in a vendor’s attack surface, linking the ComfyUI image generation to vendors that appear to offer unrelated services.
Needless to say, if you’re using ComfyUI, make sure authentication is enabled. That risk should be easily treatable. The other risks apparent from surveying these exposures may not be so easy. In several cases, the image generation services were hosted on the same IP addresses as seemingly unrelated businesses, crossing over between manufacturing, consulting, and consumer products. The risk of boutique AI/ML vendors with poor separation of duties between multiple business ventures demands additional diligence from vendor risk management teams, but it is one that surveying exposed ComfyUI instances can help highlight.