In case anybody ever has the need to pull ALL KiCad repositories and be sure to catch them all, here's how:
$ curl -s "https://api.github.com/orgs/KiCad/repos?per_page=1000" | grep "svn_url" | grep ".pretty" | cut -d " " -f 6 | sed -e "s/\"\(.*\)\".*/\1/g" | sort | while read -r line; do git clone "$line"; done
This clones all available .pretty git repositories.
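For reference, each repo entry in the API's JSON response carries the URL in a line roughly like the one below (the repository name is made up for illustration); `cut -d " " -f 6` picks the quoted URL out of that line, and the sed expression strips the quotes and the trailing comma:

# One svn_url line of the pretty-printed JSON response (hypothetical repo name):
#     "svn_url": "https://github.com/KiCad/Example_Footprints.pretty",
$ echo '    "svn_url": "https://github.com/KiCad/Example_Footprints.pretty",' \
  | cut -d " " -f 6 | sed -e "s/\"\(.*\)\".*/\1/g"
https://github.com/KiCad/Example_Footprints.pretty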
I prefer this approach over the pre-made lists circulating around, as this one is always up to date ...
Edit: corrected the script, which had been mangled by the blog, and added support for direct cloning.
You can ask for 1000, but GitHub only returns a maximum of 100 results per page, so you have to fetch multiple pages. As there are already more than 100 repos, your one-liner will miss some.
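The page count is advertised in the Link header GitHub sends along with every response, which is what the sed call below extracts. Roughly like this (the URLs and page numbers here are illustrative; the real header points at the paginated API URLs):

$ curl -s -I "https://api.github.com/orgs/KiCad/repos" | grep -i "^link:"
Link: <https://api.github.com/orgs/KiCad/repos?page=2>; rel="next", <https://api.github.com/orgs/KiCad/repos?page=5>; rel="last"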
Ah, :palm: stupid me! Didn't check ...
So here's the updated one-liner:
$ lastpage=$(curl -s -I "https://api.github.com/orgs/KiCad/repos" | grep -i "^link:" | sed -e "s/.*page=\([0-9]*\)>; rel=\"last\".*/\1/g"); for ((page=1; page<=lastpage; page++)); do curl -s "https://api.github.com/orgs/KiCad/repos?page=$page" | grep "svn_url" | grep ".pretty" | cut -d " " -f 6 | sed -e "s/\"\(.*\)\".*/\1/g"; done | sort | while read -r line; do git clone --depth 1 "$line"; done
Or in a more readable form to put in a script file:
#!/bin/bash

# Read the Link header of the first API response to find the
# number of the last results page.
lastpage=$(
    curl -s -I "https://api.github.com/orgs/KiCad/repos" \
    | grep -i "^link:" \
    | sed -e "s/.*page=\([0-9]*\)>; rel=\"last\".*/\1/g"
)

# Fetch every page, keep only the svn_url lines of the .pretty
# repositories, and strip everything but the URL itself.
for ((page = 1; page <= lastpage; page++)); do
    curl -s "https://api.github.com/orgs/KiCad/repos?page=$page" \
    | grep "svn_url" \
    | grep ".pretty" \
    | cut -d " " -f 6 \
    | sed -e "s/\"\(.*\)\".*/\1/g"
done | sort | while read -r line; do
    # Shallow clone is enough; the history isn't needed here.
    git clone --depth 1 "$line"
done
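If jq is installed, the same list can be collected without the grep/cut/sed juggling. A minimal sketch, assuming jq 1.5+ and that the unauthenticated rate limit isn't hit (on an error response the loop simply stops early); it walks the pages until GitHub returns an empty array, so no Link header parsing is needed:

#!/bin/bash

page=1
while true; do
    # Ask for one page and keep only the svn_url fields of .pretty repos.
    urls=$(curl -s "https://api.github.com/orgs/KiCad/repos?page=$page" \
           | jq -r '.[].svn_url | select(endswith(".pretty"))')
    # An empty result means we are past the last page.
    [ -z "$urls" ] && break
    echo "$urls"
    page=$((page + 1))
done | sort | while read -r line; do
    git clone --depth 1 "$line"
done

Paging until the array comes back empty costs one extra request compared to reading the Link header, but it keeps the script free of header parsing.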