And here’s a quick follow-up on the WireGuard topic:
I’ve moved a rather hacky tinc mesh VPN solution to WireGuard, all set up through an Ansible playbook. The topology is rather classic:
my workstation (a laptop, in changing network situations) connects as a ‘client’ to two WireGuard ‘servers’ acting as VPN gateways. These are publicly accessible bastion hosts that are also members of a private subnet to which they are meant to give access. The specific nodes are cloud instances, one each on Hetzner Cloud and Vultr.
Hetzner recently started to provide private interfaces for their cloud instances; currently the private addresses seem to be assigned randomly when using the CLI tool, but they can also be specified via the web interface. Vultr has offered that service for longer, but there the private IP cannot be specified and is assigned at random.
The terms ‘client’ and ‘server’ used above are a bit anachronistic, as WireGuard makes no such distinction. The ‘servers’ merely do not get endpoints for their peers in their interface configuration, since they do not initiate connections.
Generally, when running a Linux VPN gateway that connects two interfaces in different subnets (here wg0 is the WireGuard interface, ens10 is the interface to the cloud provider’s virtual router and a self-configured private subnet), one only needs to set /proc/sys/net/ipv4/ip_forward to 1 and /proc/sys/net/ipv6/conf/all/forwarding to 1 and be done with it. The nodes in the private subnet may additionally need some way of receiving the route back to the VPN gateway, via a routing protocol or static routes.
I was not able to get this working on either Hetzner or Vultr, and instead had to set up NAT on the gateway via iptables, as advised in this tutorial (by the way, a good reference on how to set up WireGuard): https://angristan.xyz/how-to-setup-vpn-server-wireguard-nat-ipv6/
My theory is that the cloud providers’ virtual routers filter this kind of traffic: I can see the packets pass through both the WireGuard interface and the private-subnet interface on the VPN gateway, but not at the final node’s interface. But I could be entirely wrong.
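For reference, the NAT setup from the tutorial boils down to something like the following sketch; the interface names wg0/ens10 match the ones above, while the 10.0.0.0/24 WireGuard subnet is a placeholder for your actual range:

```shell
# enable forwarding (or persist the same settings in /etc/sysctl.d/)
sysctl -w net.ipv4.ip_forward=1
sysctl -w net.ipv6.conf.all.forwarding=1

# masquerade traffic coming in from the wireguard interface as it
# leaves towards the private subnet, so replies find their way back
iptables -t nat -A POSTROUTING -s 10.0.0.0/24 -o ens10 -j MASQUERADE
iptables -A FORWARD -i wg0 -o ens10 -j ACCEPT
iptables -A FORWARD -i ens10 -o wg0 -m state --state RELATED,ESTABLISHED -j ACCEPT
```

With the MASQUERADE rule in place the private-subnet nodes no longer need a route back to the VPN gateway, which sidesteps whatever filtering the virtual routers do.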
Part of the pipeline includes deploying code to a remote host via ssh. I generated a new key pair with ssh-keygen. This created a key in OpenSSH’s new format, starting with:
-----BEGIN OPENSSH PRIVATE KEY-----
Apparently Ansible does not like this format and errored out at the “Gathering facts” step with the message “Invalid key”. Googling that was not very successful, and I could not find that particular message in the Ansible source, until I eventually found an unrelated closed issue on github which pointed me towards possible problems with key formats.
Eventually I generated a new key pair like so:
ssh-keygen -m PEM
the -m option setting the key format. The key then had the starting line
-----BEGIN RSA PRIVATE KEY-----
As far as I understand, both keys are actually RSA keys; the latter’s PEM format is implied, whereas the former uses a newer OpenSSH format I was not previously aware of.
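If you already have a key in the new format, it should also be possible to rewrite it in place rather than regenerate it: ssh-keygen’s -p (change passphrase) mode accepts -m to select the output format. A sketch, with the path and the empty passphrases as assumptions:

```shell
# work on a copy so the original key stays untouched (path is an example)
cp ~/.ssh/id_rsa /tmp/id_rsa_pem
# rewrite the private key file into classic PEM format;
# -P/-N pass the old/new passphrase (empty here)
ssh-keygen -p -m PEM -P "" -N "" -f /tmp/id_rsa_pem
head -1 /tmp/id_rsa_pem
```

Note this only rewrites the container format; the RSA key material itself stays the same, so the public key and authorized_keys entries remain valid.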
Earlier runs of ssh-keygen did produce keys in the PEM format, and as I am running Arch Linux with OpenSSH_8.0p1 and OpenSSL 1.1.1c (28 May 2019), one of the rolling updates to my system probably brought along this unexpected change.
Hope that helps somebody.
I’ve been trying to compile Go programs on the GnuBee, which runs on a MIPS architecture.
Found this on github:
I have successfully cross compile go program into mips32 bin with below command, you may try this also.
GOARCH=mips32 is for ar71xx, change to GOARCH=mips32le if it is ramips.
git clone https://github.com/gomini/go-mips32.git
cd go-mips32
sudo mkdir /opt/mipsgo
sudo cp -R * /opt/mipsgo
# (build the fork's toolchain under /opt/mipsgo and put its bin/ on your PATH first)
go build helloworld.go
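Worth noting: upstream Go has shipped native 32-bit MIPS ports for a while now (GOARCH=mips/mipsle since Go 1.8, plus GOMIPS=softfloat for FPU-less boards a couple of releases later), so with a reasonably recent toolchain the special fork may no longer be needed. A sketch, with the little-endian target and the filenames as assumptions:

```shell
# cross-compile for a little-endian 32-bit MIPS board;
# GOMIPS=softfloat is for CPUs without an FPU, adjust to your target
GOOS=linux GOARCH=mipsle GOMIPS=softfloat go build -o hello-mipsle helloworld.go
file hello-mipsle   # should report a MIPS executable
```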
I had to fix a do-release-upgrade from 16.04 to 18.04 that broke due to a severed ssh connection with no screen running (apparently earlier distro upgrades used screen to prevent exactly this kind of problem).
The machine is a PC Engines apu2, so no video. Also, the root file system is sitting on a miniPCI SSD.
Eventually, my ThinkPad X230i and this chroot cheatsheet helped: https://aaronbonner.io/post/21103731114/chroot-into-a-broken-linux-install
mount the root filesystem device
$ mount -t ext4 /dev/<device> /mnt/
if there’s a different boot partition or anything else
$ mount -t ext2 /dev/<device> /mnt/boot
special device mounts
$ mount -t proc none /mnt/proc
$ mount -o bind /dev /mnt/dev
$ mount -o bind /sys /mnt/sys
then chroot in
$ chroot /mnt /bin/bash
$ source /etc/profile
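Inside the chroot, the interrupted upgrade can then typically be resumed with the standard dpkg/apt recovery steps; a sketch, not specific to this machine:

```shell
# finish configuring the half-installed packages left by the aborted upgrade
dpkg --configure -a
# repair any broken dependencies, then complete the upgrade
apt-get -f install
apt-get dist-upgrade
```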
In order to help with troubleshooting in the future, I followed this advice to get a systemd service unit for a permanent login console on the serial port; mine for some reason runs on S0:
# systemctl enable serial-getty@ttyS0.service
# systemctl start serial-getty@ttyS0.service
It won’t help if systemd itself does not start, but otherwise it is online really early.
Personally, I believe that the Litecoin and Ethereum projects have so far been able to generate a strong economy around them; however, projects like Nimiq definitely convince me with their approach to usability and simplicity towards the user.
I am considering Ubuntu 16.04 as base operating system.
The playbook does the following things:
- Install the necessary dependencies: ruby-dev for ruby 2.3’s gem package manager, and unzip to handle the release file from github
- Create a specific user nimiq and a program directory
- Download and unpack the release file from github under a version-specific directory below the program directory
- Create skypool client configuration file according to your demands and with your wallet address
- Create a systemd unit file, start the skypool client as a service and enable restart on reboot
- Create a status checker that uses the skypool api to check the worker’s online/offline status
- Create a crontab entry for the root user to run the status checker every ten minutes
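The generated unit file looks roughly like this sketch; the paths, the start script name, and the description are assumptions based on the setup described above:

```ini
[Unit]
Description=Skypool Nimiq mining client
After=network-online.target

[Service]
User=nimiq
WorkingDirectory=/home/nimiq/skypool/current
ExecStart=/home/nimiq/skypool/current/run.sh
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

The Restart=on-failure line together with the WantedBy target is what gives you both the restart-on-crash and the start-on-reboot behaviour from the list above.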
The 10-minute cron interval is a tradeoff based on how brittle the online/offline check delay currently feels to me through the skypool site. Presumably skypool does not have a real heartbeat check towards the worker but assumes the worker is online while it receives results from it, and offline when it does not (most pools in the cryptocurrency world work like that). So in terms of the perfect period between checks, your mileage may vary.
The service currently runs under the user nimiq, hence a non-privileged user of the system. However, the systemd daemon used is root’s, hence only the root user can restart the nimiq service. For this reason, the cron entry is registered through the root user. If you want to be able to use the nimiq user to restart the nimiq service, you have to run a systemd user daemon for the nimiq user. I have successfully done that for another service playbook, and I might add this information in the future, if demand is voiced.
Find below the full gist as published on github. Full gist here.
Check which COM port is used; mine was set to ‘COM4’.
Get a USB-to-serial converter and install drivers. Some of those converters seem to have timing problems, but I did not encounter that.
I once tried the lowest baud rate, 9600, and that produced some nice screen carnival, but nothing really legible.
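For reference, the apu2’s serial console runs at 115200 baud (8N1) by default. Under Windows you point putty at COM4 with those settings; on Linux the equivalent is something like the following, where the device name is an assumption to check against dmesg:

```shell
# open the serial console at the apu2's default 115200 8N1
screen /dev/ttyUSB0 115200
```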
prepping usb stick
Download the USB prep tool ‘TinyCore USB Installer’ and run it against one USB stick; I’ve used an 8GB stick. Make sure it’s not the slowest.
To try it out you can now boot into TINYCORE: put the stick into the APU2’s USB port and boot up with the serial null-modem cable connected and the putty session open. A finished boot is indicated by an audible beep. This is good for checking the serial connection, which you should have established in parallel.
If you want to keep the option of booting into TINYCORE open, back up the syslinux.cfg from the USB’s root directory, as this one will be overwritten by the package content we are now downloading.
Download the special ubuntu package from pcengines, unpack it and move the three files into the USB’s root folder /
Now plug the USB stick into the apu2 and boot with the serial null-modem cable connected and the putty session open. You will see the setup menu, similar to this screenshot:
The terminal setup process seems daunting at first, but it is essentially analogous to the graphical ubuntu installer. I found my way around by basically following the Easy Path(tm), accepting most of the installer’s suggestions and going step by step through the menu. In some of the sub-menus I was able to make educated changes, as I knew a few more details and had a good idea where I wanted to go with this system, but this might not apply to you.
The one exception was the network configuration. Running the automatic network detection seems to have got the DHCP info, but when I dropped into the busybox ash shell environment (the menu option “Execute a shell” in the main hierarchy at the beginning of the installation process), I had to run dhclient directly on the interface again. Checking via ip addr, I could verify that the values had indeed been applied, and could ping any public server. With exit I dropped back into the installation menu. On a later second setup run this problem did not occur again.
I chose no automatic updates as I can see the cronjob using quite some resources. I’d rather schedule that manually for this particular system at the moment, as part of my minimum-running-services policy for this instance.
I followed some tip regarding the bootloader installation, and it apparently solved my earlier problem of an unfinished installation. I lost the link, but it boiled down to manually entering the first partition of the setup target (the PCIe flash device in my case), so /dev/sdb1 as opposed to /dev/sdb. Again, this might be different for you.
Once that was done, and with a bit more patience, I rebooted and eventually a login via ssh could be established. I then halted the machine, physically unplugged the USB key and the console, and replugged power.
After about 45 seconds ping answered, and after that ssh came back online.
Thanks to https://www.bentasker.co.uk/documentation/linux/173-configuring-postfix-to-automatically-forward-mail-for-one-address-to-another
Assuming you’re running Postfix, it’s as simple as the steps below
First we make sure the virtual mappings file is enabled:
vim /etc/postfix/main.cf

# Scroll down until you find virtual_alias_maps
# Make sure it reads something like
virtual_alias_maps = hash:/etc/postfix/virtual

# We also need to make sure the domain is enabled
virtual_alias_domains = example.com
Save and exit; next we add the aliases to our mapping file:
nano /etc/postfix/virtual

# Forward mail for email@example.com to firstname.lastname@example.org
email@example.com firstname.lastname@example.org
Simple! And if we want to send to two different addresses at once, we just specify them:
email@example.com firstname.lastname@example.org, email@example.com
Finally, we just need to create the hash (actually, later versions of Postfix don’t require this):
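The hash is generated with postmap, and postfix then needs a reload to pick up the main.cf changes (standard commands, file paths as above):

```shell
# build /etc/postfix/virtual.db from the plain-text mapping file
postmap /etc/postfix/virtual
# reload postfix so the configuration changes take effect
systemctl reload postfix
```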
It’s exactly the same principle as passing mail into a local user’s mailbox.
This is a quick set of steps to generate a self-signed certificate:
openssl genrsa 2048 > host.key
openssl req -new -x509 -nodes -sha1 -days 3650 -key host.key > host.cert
# [enter *.domain.com for the Common Name]
openssl x509 -noout -fingerprint -text < host.cert > host.info
cat host.cert host.key > host.pem
chmod 400 host.key host.pem
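Note that -sha1 is considered obsolete for certificate signatures these days. A more current variant collapses key and certificate generation into a single sha256-signed command; filenames as above, the -subj value being the same wildcard Common Name:

```shell
# generate key and self-signed sha256 certificate in one step,
# skipping the interactive prompts via -subj
openssl req -x509 -newkey rsa:2048 -nodes -sha256 -days 3650 \
  -keyout host.key -out host.cert -subj "/CN=*.domain.com"
```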
Moving from PHP to Rails
A very interesting hands-on experience talk.