How can I get the URL of the uploaded video programmatically?
After I upload a video, it prints the video URL to STDOUT, but there is also a lot of other information there. I need the URL, and only the URL, to do things with it.
My current guess is that I could use a regex to extract the URL from stdout, and then maybe check whether or not the URL is valid. But I'm wondering if there is a better way to do it.
(My secret wish is that there is some flag that tells the script to print only the URL to stdout, and everything else to stderr. But I think this may be asking for too much.)
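The regex approach described above could be sketched like this (a minimal sketch; the sample output line and the pattern are illustrative, not taken from prismedia itself):

```
import re

# Simplified pattern: match http(s) URLs in a blob of mixed log output.
URL_RE = re.compile(r"https?://[^\s'\"]+")

def extract_urls(output):
    """Return every URL found in the program's stdout."""
    return URL_RE.findall(output)

sample = (
    "Uploading video...\n"
    "Peertube: Watch it at https://peertube.example/videos/watch/1234-abcd.\n"
)
# The trailing sentence period gets caught by the pattern, so strip it.
url = extract_urls(sample)[0].rstrip(".")
print(url)
```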
Indeed, we have never worked on a "batch" version of the script; it could be useful for automating even more!
I'll note this down as a feature, I guess. Thanks for the idea.
Meanwhile, you can manually modify the template around https://git.lecygnenoir.info/LecygneNoir/prismedia/src/branch/develop/lib/pt_upload.py#L206 (line 206 in `lib/pt_upload.py`) to change the display according to your needs.
In `Peertube: Watch it at %s/videos/watch/%s.` the second `%s` represents the UUID of the video; you can change the text to something easier to parse, for example:
```
template = 'UUID %s.'
logging.info(template % (uuid))
```
which will allow you to parse the output with:
`./prismedia --youroptions | grep UUID | awk '{print $2}'`
(Quick and dirty, probably improvable, but it should work as a workaround ^^')
Hope it helps!
@wotaniii I'm thinking of an option like `--batch=outputFileOrPipe`
Then, when an upload is done and we have the URL, we write to the file: `Youtube youtu.be/...`, and when Peertube finishes, `Peertube peertu.be/...`.
The syntax is not final, but I think just emitting the names like this may make it possible to pipe the output (unlike a JSON format). I'm not 100% certain we can pipe it easily, but we might.
Otherwise we will need to send all the logs to stderr and the URLs to stdout. The `--batch` option would remove everything that is not a `server URL` pair.
(By the way, I don't really like the "batch" name.)
I suggest making the parameter similar to other programs.
For instance, if you want wget to pipe to stdout, you do this:
`wget -q -O - http://www.example.com/file.html`
* `-q/--quiet` turns off logging
* `-O/--output-document` redirects the output
* `-` stands for stdout
So applying this scheme to prismedia would look like this:
`./prismedia --youroptions -q -O -`
Maybe `-O/--output` is still not the right name. Maybe `-u / --print-url` or `-u / --output-url` would work better.
I think that even with `-q`, errors that prevent the output from being generated should still be printed to stderr.
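Wired up with Python's `argparse`, the wget-style flags proposed above might look like this (a sketch only; `--print-url` is the suggestion from this thread, not an existing prismedia option):

```
import argparse

def build_parser():
    parser = argparse.ArgumentParser(prog="prismedia")
    parser.add_argument("-q", "--quiet", action="store_true",
                        help="suppress informational logging (errors still go to stderr)")
    parser.add_argument("-u", "--print-url", action="store_true",
                        help="print only the final video URL to stdout")
    return parser

args = build_parser().parse_args(["-q", "--print-url"])
print(args.quiet, args.print_url)  # -> True True
```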
In case anyone is wondering: I'm now using a small wrapper for prismedia ([link](https://gitlab.com/juergens/stabbot/blob/master/src/peertube.py)).
I went for the easy option of just looking for "Peertube: Watch it at " in the output.
Don't worry about my dirty hack when making changes. If something breaks on my end, it's for me to fix.
Thinking about it again, it would be really helpful to have each "platform" return a common structure when the upload ends. A structure/object containing at least the following values:
* errors encountered
* URL of the video
* publish time
* platform name (YouTube or Peertube for now)
* privacy status, maybe
Then, when both platforms finish their uploads, we can show a unified summary, with errors if any. With the auto-upload functionality, we would be able to use the same structure to fill the "online state" file.
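As a sketch, that common structure could be a small dataclass (field names here are illustrative, not taken from the prismedia codebase):

```
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class UploadResult:
    """Common result each platform upload could return."""
    platform: str                       # "youtube" or "peertube"
    url: Optional[str] = None           # URL of the uploaded video
    publish_time: Optional[str] = None  # scheduled or actual publish time
    privacy: Optional[str] = None       # e.g. "public", "unlisted"
    errors: List[str] = field(default_factory=list)  # errors encountered

result = UploadResult(platform="peertube",
                      url="https://peertube.example/videos/watch/1234")
result.errors.append("thumbnail upload failed")
print(result.platform, result.url, result.errors)
```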
Hello!
At last I have some time to work on it! (I've wanted to improve and unify the logging system for a long time ^^)
The objective is to rewrite the logging to use exclusively the "logging" library; once that's done, adding an option for quiet mode or stderr redirection should be easier.
I also need to find out how to remove the intrusive "Youtube API" logging used in their library, if possible.
Hello @Zykino, @wotaniii
If you are around, I have a question regarding the use of `-q` with `--print-url`.
Currently I have something like this:
```
$ prismedia --file=prismedia/prismedia/samples/yourvideo.mp4 -q --print-url
https://domain.peertube.tld/videos/watch/119a52d9-f897-4398-98dd-434bf34b8aa5
https://youtu.be/UUID
```
This is already usable, but I see no easy way to tell which URL is Peertube and which is YouTube :-/
If I add more output (like `Peertube:`, `Youtube:`), it will need more parsing, which is a shame given the objective of `--print-url`; but without it, I don't see how to tell the user clearly which URL is which.
What do you think:
1. Is printing only the URL enough (users should recognize their own Peertube domain themselves)?
2. Is adding a specific keyword (e.g. `peertube:`, `youtube:`) necessary?
3. Another idea?
Thanks!
I feel like you need to include the platform. Otherwise we will always need the same output order (upload only on YouTube? -> we'd still need an empty first line for programs expecting YouTube on line 2).
Doing a split on the space or colon character does not seem like much work in any language (except ancient C).
We may even add a `--json` option to be 100% understandable by other programs, while the current behavior is good enough to be scriptable with `awk` or others if needed.
Just as a note: I reviewed your code, and I think the second commit should be different. In that commit, each platform has the task of printing its final address, which means we don't get a summary at the end; the info from the first platform is still mixed into the middle of the second platform's upload. If we want to use this functionality to help with #27, it will be easier to have the info returned to `upload.py` and let it print the summary and/or create the auto file (by calling a function in `auto.py` with this info, including the last error).
Sorry if this is dense / not easily comprehensible.
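Indeed, splitting such a `platform: url` line is a one-liner in most languages; for instance, in Python (the line format here is the one proposed in this thread, not prismedia's actual output):

```
def parse_line(line):
    """Split an output line of the form 'Platform: url' into (platform, url)."""
    platform, url = line.split(maxsplit=1)
    return platform.rstrip(":").lower(), url.strip()

platform, url = parse_line("Peertube: https://peertube.example/videos/watch/1234")
print(platform, url)  # -> peertube https://peertube.example/videos/watch/1234
```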
Thanks to both of you for your quick answers!
I see you have exactly the opposite point of view ^^
I think I may try a compromise, something like printing the platform on stderr and the URL on stdout, so parsing will be possible for people who need it; but if you use only one platform (and so get one URL), you just use stdout. I'll see if this is possible and not too confusing.
@zykino regarding #27, I have a preference for logging as close as possible to the code, to be sure not to miss anything if something goes wrong afterwards; but the logging facility in Python is pretty flexible, and I think it would be possible to build an object along the way to use afterwards, decorrelated from stdout/stderr (which would be useful even in auto mode for debugging).
> Otherwise we will need to always have the same order (upload only on Youtube? -> still need an empty line for program wanting Youtube on line 2.
So the first line is always Peertube, the second line is always YouTube?
I like that too. But it's unintuitive behavior, so it must be very well documented.
Hello,
Thank you both for your answers!
Sorry for the delay. The computer I use for work broke, and it took some time to repair :sweat_smile:
Now it works again, and I hope I'll be able to finish this feature!
Playing with stderr/stdout (e.g. URL on stdout, platform on stderr) seems too complicated; it can't be explained in a few `--help` lines.
On the other hand, specifying nothing on the command line is also pretty confusing for new users, and if we add new platforms later it would be complicated to manage, as prismedia aims to be flexible after all. Streamable is an option that could be added @wotaniii \o/
At least my computer outage gave me time to think! I will choose another solution:
* `-q --url-only` prints only the URL, for those who know what they are doing
* `-q --batch` allows batch processing and prints something like `platform: url` for easier parsing
Now I only need the time to finish that ^^
See you soon for the next release!
Hello,
The new [v0.10.0](https://git.lecygnenoir.info/LecygneNoir/prismedia/releases/tag/v0.10.0) should fix this issue.
You can now use the `--url-only` option when uploading, which silences all logs (except critical errors preventing the upload) and prints only the final URL after the upload.
Hope you'll enjoy it!
Do not hesitate to reopen if needed.
Regards,