ACAP has released documents outlining the use cases they will be testing, along with proposed changes to the Robots Exclusion Protocol (REP), covering both robots.txt and META tags. There are some very practical proposals here for improving search engine indexing. However, the only search engine publicly participating in the project is Exalead (http://www.exalead.com/), which according to Alexa attracted 0.0043% of global internet visits over the last three months. The main documents are:

- "ACAP pilot Summary use cases being tested"
- "ACAP Technical Framework - Robots Exclusion Protocol - strawman proposals Part 1"
- "ACAP Technical Framework - Robots Exclusion Protocol - strawman proposals Part 2"
- "ACAP Technical Framework - Usage Definitions - draft for pilot testing"
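For reference, the existing REP that ACAP's strawman proposals would extend is blunt: a plain-text robots.txt file of per-crawler allow/disallow rules, plus per-page META tags. A minimal sketch (the paths here are hypothetical):

```
# robots.txt - site-wide crawl rules
User-agent: *          # applies to all crawlers
Disallow: /archive/    # ask crawlers not to fetch this path
Allow: /archive/free/  # widely supported, though not in the original spec

# Per-page control goes in the HTML <head> instead:
# <meta name="robots" content="noindex, nofollow">
```

Judging by the document titles, the ACAP proposals aim to layer finer-grained usage definitions on top of this simple fetch/index/follow vocabulary.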
What would cause other search engines to recognize the ACAP protocols rather than ignore them? If enough publishers implemented ACAP and made recognizing it a condition of indexing their content, that could put pressure on the engines. Maybe.