Improve Capture One

Request a new feature, or support for a camera/lens that you would like to use in Capture One.

Status Future consideration
Workspace Feature requests
Categories Capture One Pro
Created by Guest
Created on Oct 24, 2023

Content Credentials

Hello everyone! Thank you for your time. Like many of you, I have been a long-time loyal user of Capture One Pro (C1PRO), going back to the version 3.7.8 days circa 2008, and I have upgraded loyally since. I stopped upgrading after version 12; however, to keep up, I finally caved in and upgraded to version 22.

I am sharing this about my usage and experience with C1PRO.

Would anyone find it helpful if all outputs (exports) made in C1PRO contained something called "Content Credentials," as Adobe Photoshop has? The context is making sure anyone can tell whether our images are fully AI-made or use any AI-generated elements.

I know Artificial Intelligence (AI) is here to stay. Even the forthcoming update of C1PRO fully embraces AI and incorporates AI Masking, a feature Capture One themselves have openly announced and advertised in their promotional emails.

  • Admin
    Capture One Product Manager
    Jul 26, 2024

    Hi everyone, I just wanted to let you know we are actively looking into this.

    I am not marking this feature as "in the works" yet, as the level of complexity to make this happen is very high - this is quite different from what we normally do. We're hoping to make progress in the next few weeks.

    I do, however, have a question for all of you: it seems more than likely that Capture One will never be the first link in the chain of Content Credentials; cameras (or image generators) will be. Would it make sense to enable reading and writing Content Credentials only for files that already contain them?

  • Guest
    Feb 25, 2024

    I may have missed it, but I would love to hear from the C1 folks whether they are in fact looking at this for the software. I will keep posting to keep this thread alive. I have found the verify site very easy to use, and while the technology will take a while to get into cameras, it will eventually get into phones and other devices (I believe). With today’s international political climates and the advance of AI-generated images, this technology will likely need to adapt over time, but the framework makes sense. Let’s get it in the software and improve as we go.

  • Guest
    Feb 16, 2024

    I'd also support this proposal, and I think it will become a make-or-break issue with surprising speed. I expect this will be a must-have throughout our workflow within the next few years.

  • Eric Valk
    Feb 11, 2024

    I have already made such a proposal but without much support yet.

    https://support.captureone.com/hc/en-us/community/posts/16207906940957-IPTC-Field-for-Data-Mining

  • Raymond Harrison
    Feb 11, 2024

    I definitely support this post and would appreciate this capability.

  • Eric Valk
    Feb 11, 2024

    I support this proposal (and have voted for it and followed it).
    The complementary question is: shall we have a mechanism to give or deny permission to use our images for AI training?

  • Guest
    Feb 11, 2024

    Me too. I think C1 should join the Content Authenticity Initiative in the near future.

  • Richard Huggins
    Feb 10, 2024

    A bit late to this maybe. I'm in a camera club that is toying with making content credentials compulsory for competition entries, and it may extend to more general competitions. Most photographers here (Australia) use Adobe which has it as beta at the moment so there is little resistance. We need something.

  • Guest
    Dec 19, 2023

    I second this (and the other) comment. The sooner this technology is handled by C1, the better.

  • Guest
    Dec 19, 2023

    Add me to the supporters of this technology. With both Leica and Nikon on board, you are going to see more and more cameras with this tech built in. It just makes sense for C1Pro to incorporate it; plus, it is an open standard. There really isn't any reason why you shouldn't.

  • Guest
    Nov 15, 2023

    I guess, since they managed to get Adobe on board, it is most likely to become the standard to support. So, the earlier C1 starts to support it, the better for everyone.

  • BeO O
    Nov 3, 2023

    Hi David.

    "If Capture One does nothing more than read and add to the tokens, exporting it with the export functions, then everything else takes care of itself."

    Absolutely.

    I think I agree with almost everything you've written.

    "Personally, I don’t believe that a mark on an image is necessary if the underlying data is there. Trust in the photographer and if proof is necessary, it is in the image. So, news editors and award judges can confirm if needed."

    My firm belief is that for deep fakes *) which are published or distributed, it must be very clear that it is a deep fake, and "very clear" to me does not mean "buried in the metadata".

    *) "Deep fake" defined as an image which pretends a subject or subjects is doing something which they actually didn't, or events which did not happen or did not happen in the way as pretended. UNLESS it is art and clear to everyone that it is.

    If they are not published (yet) but sent to news editors or judges, and not distributed to parties where you cannot be sure they will stay private or labeled by the recipients if used, then metadata might be sufficient. Might.

    But think about the internet: what about the girl on toktiktak who finds an image of herself, posted by someone else, in which she is doing something she actually didn't do? Or a political blog with deep-fake images? Only detectable if you download the image and look into the metadata. No.

    I don't care about sky replacement or removing litter, or even putting someone small into a big landscape, or someone you asked for permission, if the image is (clearly) art, but I do if it is news or of public interest and can be used to mislead or harm people on important matters.

    IMO, it is the responsibility of the image creator to decide and label whether it is a deep fake*); he is accountable, and he needs to have this choice. It cannot be a software vendor. Suppose you merge two images of the same group of people, captured a second apart, so that all people including John now have a smile on their face (which they didn't in the separate images), and John is not a politically exposed person (which would be far more sensitive). Is this a deep fake? It would be if you replaced John with Jim, or (f/m)ade John dump his beer on someone.

    The software (probably) cannot decide whether an image should get a "deep fake" label. It can record the technical steps in the metadata, though.

    But, deep fakes*) should scream to you what they are.

  • Guest
    Nov 3, 2023

    @BeO, I agree with some of what you said. However, regarding sky replacement and risk, that is exactly the point of content authenticity: to be transparent about what changes were made from the original capture, and about what generated the original capture.

    Replacing a sky is not acceptable for some photographic applications, for example journalism and some photography awards, yet one can easily argue that selling photography as art does not have the same constraint. So it is not about whether sky replacement is right or wrong; it is about proving whether or not the sky was replaced, for the given use of the photograph.

    At the same time, changing luminosity may not matter regardless of the use of the photograph, but knowing it was done, and comparing to the original, simply authenticates that the sky is darker in the representation but is still the same sky (or whatever luminosity was created). Artistically representing the same scene is different from artistically replacing things in the same scene. Until recently this discussion was less important, because it was too difficult to change an image that drastically without it being noticed.

    We have always had some level of content authenticity with film. The negative (or slide) was compared to the final image output and by looking at all the negatives from the roll, one could typically tell if the image was altered and in what fashion.

    Personally, I don’t believe that a mark on an image is necessary if the underlying data is there. Trust in the photographer and if proof is necessary, it is in the image. So, news editors and award judges can confirm if needed.

    The key is to adopt the technology in the software and at the pace that the ability to change photographs is progressing, the need to adopt the technology will increase with the same pace. If Capture One does nothing more than read and add to the tokens, exporting it with the export functions, then everything else takes care of itself.

    I think that is the ask here, but correct me if I am wrong.

  • BeO O
    Nov 3, 2023

    A very interesting topic.

    As pointed out already, Content Credentials and Generative AI transparency are not the same.

    But Content Credentials can support Generative AI transparency (though only to a certain extent).

    Content Credentials
    is fostered by Adobe. It seems to be a specific (read: proprietary) implementation to achieve the goals of the CAI (Content Authenticity Initiative) and C2PA (Coalition for Content Provenance and Authenticity), Adobe being one member amongst many others.

    https://contentauthenticity.org/how-it-works
    https://contentauthenticity.org/our-members
    https://c2pa.org/

    Content Credentials is in beta in PS and also in LR.

    https://helpx.adobe.com/lightroom-cc/using/content-credentials-lightroom.html

    I think the initiative is valuable for content creators and consumers, and it would be a good idea for C1 to support it, should this become a more mainstream and established technology.

    Generative AI transparency 

    There are no rules in effect yet, but there is a law in preparation in the EU which deals with the risks and opportunities of AI; one part of it is transparency about content generated by AI.

    The EU regulation is under negotiation with the member states now, nothing finalized yet, but it will have a global impact, imo.

    I am quite sure that replacing a sky (and the respective change in illumination of the scene) by an AI module will not have to be made transparent, because it does not bear a noteworthy risk.

    From the current proposal of the regulation:

    Users of an AI system that generates or manipulates text, audio or visual content that would falsely appear to be authentic or truthful and which features depictions of people appearing to say or do things they did not say or do, without their consent (‘deep fake’), shall disclose in an appropriate, timely, clear and visible manner that the content has been artificially generated or manipulated, as well as, whenever possible, the name of the natural or legal person that generated or manipulated it. Disclosure shall mean labelling the content in a way that informs that the content is inauthentic and that is clearly visible for the recipient of that content. To label the content, users shall take into account the generally acknowledged state of the art and relevant harmonised standards and specifications.

    Amendment 486
    Proposal for a regulation
    Article 52 – paragraph 3 – subparagraph 1

    from: https://www.europarl.europa.eu/doceo/document/TA-9-2023-0236_EN.html

    Especially regarding the disclosure requirement ("Disclosure ... is clearly visible for the recipient of that content."), I have some doubts that a new metadata structure like Content Credentials is sufficient, because it is not clearly visible if you open a JPG file in a dumb application, let's say MS Paint.

    Probably a watermark is appropriate!?! But this is supported by C1 already.

    But then, I don't know the "generally acknowledged state of the art and relevant harmonised standards and specifications".

    No worries: for those who feel concerned, sky replacement will explicitly be protected from the need to have watermarks or the like :-) :

     Paragraph 3 shall not apply where the use of an AI system that generates or manipulates text, audio or visual content is authorized by law or if it is necessary for the exercise of the right to freedom of expression and the right to freedom of the arts and sciences guaranteed in the Charter of Fundamental Rights of the EU, and subject to appropriate safeguards for the rights and freedoms of third parties. Where the content forms part of an evidently creative, satirical, artistic or fictional cinematographic, video games visuals and analogous work or programme, transparency obligations set out in paragraph 3 are limited to disclosing of the existence of such generated or manipulated content in an appropriate clear and visible manner that does not hamper the display of the work and disclosing the applicable copyrights, where relevant. It shall also not prevent law enforcement authorities from using AI systems intended to detect deep fakes and prevent, investigate and prosecute criminal offences linked with their use


    Anyway, there is currently no generative AI in C1.

  • Guest
    Nov 2, 2023

    I was the original author who started this. I sometimes "unsubscribe" from posts as I no longer wish to be notified via email.

    I do believe that the number you have indicated isn't a "vote" per se. I unfollowed my post and it now reads "6."

    I hope that these comments, and this discussion of how Content Credentials should be included in future C1PRO development, will be pinned or taken up for serious internal discussion on how best to implement this going forward.

  • Marcin Mrzygłocki
    Nov 2, 2023

    Raising attention a bit: currently 7 people follow this thread, yet it has only 1 vote in total so far despite a quite positive reaction - can you all check whether you have put your vote in? Maybe the original post needs an update to reflect the ongoing discussion?

  • Guest
    Nov 2, 2023

    I added the same request. The idea is multi-fold, and I have the Leica M11-P, which is the first Leica camera to contain a chip that creates the content authentication directly in the DNG. The idea is simple.

    Embedded in the DNG is a blockchain-style record that lists the camera serial number, the author (which is variable, typed into the camera menu) and some other information, which, as I understand it, includes a view of the original image out of camera.

    As the editing and exporting process continues, the edits are stored in the blockchain.

    At any time, the image can be viewed through a content authentication viewer (free) and the information decoded and listed.

    It does not prevent the use of AI, but is intended to be a truthful list of changes, such that if AI were used, it would be evident. If not, that would also be evident. If content were removed (i.e., people erased, clouds erased, etc.), that would be listed in the steps to edit the image, but would also be evident when comparing the out-of-camera image embedded in the blockchain to the resulting image submitted.

    Clearly journalists and news agencies want this first, but photographic contests and grant awards would also like to see this, as would commercial entities receiving images.

    This is authenticated knowledge, not prevention.

    But it is also a must going forward for any mainstream photo-editing software. So, back to the request: please put this into the development timeline. Without this ability in C1, some will be forced to use Adobe products for certain images that require that authenticity, and can use C1 for those images that do not require it. Granted, it will start out small, but I believe this will be adopted sooner rather than later.

    Just my thoughts....
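    The blockchain-style edit record described above boils down to a hash chain: each edit entry includes the hash of the previous entry, so altering any earlier step invalidates everything after it. The following is only an illustrative toy, not the actual C2PA manifest format; the `add_entry`/`verify` helpers and all field names are invented for the example:

    ```python
    import hashlib
    import json

    def add_entry(chain, action, params):
        """Append an edit record whose hash covers the previous entry's hash."""
        prev_hash = chain[-1]["hash"] if chain else "0" * 64
        entry = {"action": action, "params": params, "prev": prev_hash}
        payload = json.dumps(entry, sort_keys=True).encode()
        entry["hash"] = hashlib.sha256(payload).hexdigest()
        chain.append(entry)

    def verify(chain):
        """Recompute every hash in order; any tampering breaks the chain."""
        prev_hash = "0" * 64
        for entry in chain:
            body = {k: v for k, v in entry.items() if k != "hash"}
            if body["prev"] != prev_hash:
                return False
            payload = json.dumps(body, sort_keys=True).encode()
            if hashlib.sha256(payload).hexdigest() != entry["hash"]:
                return False
            prev_hash = entry["hash"]
        return True

    edits = []
    add_entry(edits, "capture", {"camera": "M11-P", "serial": "12345"})
    add_entry(edits, "exposure", {"ev": 0.3})
    add_entry(edits, "export", {"format": "JPEG"})
    assert verify(edits)

    # Tampering with an earlier step is detected:
    edits[1]["params"]["ev"] = -1.0
    assert not verify(edits)
    ```

    This is why the record is tamper-evident rather than tamper-proof: nothing stops edits, but every edit either appears in the chain or breaks it.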

  • Adam Isler
    Nov 1, 2023

    I, too, have recently learned about this initiative. It embeds a read-only, encrypted history of content into the image metadata. This is good not just for ensuring an image isn’t AI, but for providing confidence and transparency into the image’s history. Leica is offering it on its new camera, and there are rumours of Sony and Nikon following suit. It seems like it would be well worth adding to C1 the ability to pick it up from supported cameras, with an option to add it at time of import for all files.

  • Brian Jordan
    Oct 27, 2023

    In short, it's complicated.  Hence the request for Capture One to explore. :)

  • Guest
    Oct 27, 2023

    Just a side note: the content credentials are normally stored in the DNG file at its creation in the camera. The ID number is linked to a photographer and can be verified. So, part of the question is: can C1 read that data in the DNG file? The second part is that editing software can write the edits and changes to the file, to show what changes have been made. Using AI masking is not editing with AI; it is a mask. However, replacing a sky using AI, or adding content to the image that is not already there, would alter the image and would show up. So, the second question is: can C1 write the changes to the content credentials? There are specific protocols for using these credentials, but they are still new. Leica just released the M11-P, which can bake content credentials into every DNG. It is becoming more and more important.

10 MERGED

CAI and C2PA support

Merged
CAI and C2PA initiatives are new standards for content creation attribution. They make it possible to see where a picture originates from (camera, AI-generated) and its history of modifications. Unlike EXIF, the data is hashed, encrypted and cannot be counterfeited,...
Jan Skýpala 11 months ago in Feature requests / Capture One Pro 6 Future consideration
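
The merged request's point that, unlike EXIF, this data is hashed and "cannot be counterfeited" comes down to a cryptographic signature over the metadata: any modification without re-signing is detectable. A minimal sketch of that idea follows; real C2PA manifests are signed with X.509 certificates, so the HMAC key here is purely an illustrative stand-in, and the field names are invented:

```python
import hashlib
import hmac
import json

# Illustrative only: C2PA uses certificate-based signatures, not a shared key.
SIGNING_KEY = b"stand-in for the issuer's private key"

def sign_manifest(manifest):
    """Sign the canonically serialized manifest."""
    payload = json.dumps(manifest, sort_keys=True).encode()
    return hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()

def is_authentic(manifest, signature):
    """Re-derive the signature and compare in constant time."""
    return hmac.compare_digest(sign_manifest(manifest), signature)

manifest = {"producer": "camera", "assertions": ["created", "exposure adjusted"]}
sig = sign_manifest(manifest)
assert is_authentic(manifest, sig)

# Unlike plain EXIF, an edit without re-signing is immediately detectable:
manifest["producer"] = "image generator"
assert not is_authentic(manifest, sig)
```

EXIF fields, by contrast, can be rewritten freely with any metadata editor and nothing in the file reveals the change.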