Talk:Machine Learning/LiftWing/Usage

Script throws error when decoding ACCESSTOKEN

I tried out the Python script to decode my access token, but it failed with `binascii.Error: Incorrect padding` on the payload portion. (I was able to decode the token via the jwt Ruby gem.) Ragesoss (talk) 20:38, 31 August 2023 (UTC)

Were you able to find an answer to this @Ragesoss? Fuzheado (talk) 22:21, 6 November 2023 (UTC)
I never learned what was wrong with the Python script (and I removed it from the docs), but it was only a script for finding the access token, which can easily be done in other ways (now documented) without code. Ragesoss (talk) 18:49, 7 November 2023 (UTC)
Created https://phabricator.wikimedia.org/T350762 to fix the issue, thanks! Elukey (talk) 08:05, 8 November 2023 (UTC)
I think the current state of the page (which only mentions that the token can be parsed with a JWT library, with no Python script) is fine. There is rarely a need to decode the token in normal operation, and if a user needs help with it, we can always assist. That way, we don't need to maintain a script that may break for various reasons. Klausman (talk) 11:02, 15 November 2023 (UTC)
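For reference, the `Incorrect padding` error usually comes from decoding the JWT payload as plain base64: the token segments are base64url-encoded with the trailing `=` padding stripped, so the padding has to be restored before decoding. A minimal sketch (standard library only, no signature verification, and the token in the usage comment is a placeholder):

```python
import base64
import json

def decode_jwt_payload(token: str) -> dict:
    """Return the decoded payload (second segment) of a JWT, without verifying the signature."""
    payload = token.split(".")[1]
    # JWT segments are base64url-encoded with the trailing '=' padding stripped;
    # restore it, otherwise base64.urlsafe_b64decode raises
    # binascii.Error: Incorrect padding.
    payload += "=" * (-len(payload) % 4)
    return json.loads(base64.urlsafe_b64decode(payload))

# Example usage (replace with your own access token):
# print(decode_jwt_payload("eyJ<header>.eyJ<payload>.<signature>"))
```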

Requesting multiple revisions in a single request

The old ORES endpoint allowed us to include multiple pipe-separated revids in one request.

E.g. https://ores.wikimedia.org/v3/scores/hrwiki?models=reverted&revids=123|456|789

How can I do the same in LiftWing? (Please don't tell me it's not possible and I'll have to send 30 separate POST requests to get data on 30 edits).

Thank you! -Ivi104 (talk) 00:36, 26 September 2023 (UTC)

Re-upping this question. I'm afraid that, from the documentation here, it requires multiple POST requests? What I'm concerned about is the lack of a REST API via a GET request. Is there anyone who could elaborate on these points? Thanks. - Fuzheado (talk) 22:21, 6 November 2023 (UTC)
I had this question as well a few months ago, and the ML team confirmed that they don't support multiple revids per request in LiftWing. The temporary `https://ores-legacy.wikimedia.org` service does support multiple revs (basically doing the spamming of the LiftWing server for you and combining results into one response), but I don't know how long that will be around. For my new application, we just went the lots-of-POST-requests route. Ragesoss (talk) 18:55, 7 November 2023 (UTC)
Exactly yes, for the moment the API requires POSTs, since all the data passed to the model server goes in a JSON payload. It is not 100% REST-compliant, I am aware, but we use KServe's v1 API behind the scenes (the de facto standard for model serving on k8s, see https://github.com/kserve/kserve). Elukey (talk) 07:58, 8 November 2023 (UTC)
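A minimal sketch of the one-POST-per-revision approach described above, using the requests library; the model name (`hrwiki-reverted`), endpoint URL, and header values are assumptions based on this discussion, so check them against the current usage page:

```python
import requests

# Assumed public Lift Wing endpoint and model name; verify both against the usage docs.
ENDPOINT = "https://api.wikimedia.org/service/lw/inference/v1/models/hrwiki-reverted:predict"
HEADERS = {
    "Authorization": "Bearer <ACCESS_TOKEN>",        # placeholder, use your own token
    "User-Agent": "my-tool/0.1 (user@example.org)",  # placeholder contact info
}

rev_ids = [123, 456, 789]
scores = {}
for rev_id in rev_ids:
    # One POST per revision: the rev_id goes in the JSON body, not in the URL.
    response = requests.post(ENDPOINT, headers=HEADERS, json={"rev_id": rev_id})
    response.raise_for_status()
    scores[rev_id] = response.json()

print(scores)
```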
At the moment we don't support batching, but we are working on it (see https://phabricator.wikimedia.org/T335480 for more info). We simplified the ORES architecture a lot: the previous API could do quick batching thanks to heavy caching and extra systems that had to be maintained, so we chose to strive for the best compromise, and sadly some features are still missing. We are aware of the issue and are trying to figure out how many people use batching and how much the feature is needed. At the moment it seems to be used by only a few folks, but hopefully with batching support we'll be able to offer a better alternative.
Would it be possible to migrate to Lift Wing before batching support lands (so with multiple calls instead of one), or do you consider that a no-go? Thanks in advance :) Elukey (talk) 08:02, 8 November 2023 (UTC)