December 19th, 2017
Wytze
The default rule in the DOCKER-USER chain seems to be to RETURN from the chain. I don't know if I am allowed to remove this entry, so for now I'll keep prepending my own rules.
# Each rule is inserted at position 1, so the last command ends up first in the chain.
iptables -I DOCKER-USER 1 -j DROP
iptables -I DOCKER-USER 1 -p tcp -m tcp -m mac --mac-source XX:XX:XX:XX:XX:XX -m state --state NEW -j RETURN -m comment --comment "John's phone"
iptables -I DOCKER-USER 1 -p tcp -m tcp -s XXX.XXX.XXX.XXX -m state --state NEW -j RETURN -m comment --comment "John's public ip"
iptables -I DOCKER-USER 1 -p tcp -m state --state RELATED,ESTABLISHED -j RETURN
I stored these commands in /etc/network/docker-iptables.sh and made it executable.
Next, determine which init system your machine is using.
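One way to check is to look at what is running as PID 1 (this assumes a procps-style ps):

```shell
# Print the name of the process running as PID 1 (the init system)
ps -p 1 -o comm=
```

On a systemd machine this prints systemd; on older setups it prints init.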
In my case this shows systemd.
I then edited /lib/systemd/system/docker.service and added the following line after the ExecStart entry.
ExecStartPost=/etc/network/docker-iptables.sh
Don't forget to run systemctl daemon-reload afterwards so systemd picks up the change.
December 14th, 2017
Wytze
I have an SSH server that needs to bind to a specific IP, but apparently sshd failed to start at boot, which was quite nasty as it is a headless machine. The reason was that the network was not ready yet:
sshd: error: Bind to port 22 on x.y.y.z failed: Cannot assign requested address.
I enabled the systemd-networkd wait-online service:
systemctl enable systemd-networkd-wait-online.service
And added this to /etc/network/interfaces just to be certain:
auto eth0
iface eth0 inet dhcp
    up service ssh start
At least I can log in again now…
Managing Docker through the CLI can sometimes be a pain. Portainer is the management interface I now use to make life a little easier. You can run it on your local Docker host by issuing the following command:
docker run -d --privileged -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data -p 127.0.0.1:9000:9000 --restart always --name portainer portainer/portainer
The web interface will then be available at http://127.0.0.1:9000.
You might also want to enable the remote management API over TCP. Edit /etc/default/docker and add the following:
DOCKER_OPTS='-H tcp://127.0.0.1:2375 -H unix:///var/run/docker.sock'
Restart the Docker daemon afterwards:
sudo service docker restart
To remove dangling volumes I use the following script:
#!/bin/sh
# Note: docker volume rm errors out when the list of dangling volumes is empty
docker volume rm $(docker volume ls -qf dangling=true)
After installing the latest version of NetBeans I found that my old plugins were not migrated. I did the following to migrate them:
Run the new version of NetBeans and tell it to import my old stuff. Then close it again.
Copy the modules directory from ~/.netbeans/<old version> to ~/.netbeans/<new version>:
cp -avn ~/.netbeans/<old version>/modules ~/.netbeans/<new version>
Copy additional items from ~/.netbeans/<old version>/config/Modules:
cp -avn ~/.netbeans/<old version>/config/Modules/* ~/.netbeans/<new version>/config/Modules
That did the trick for me though it might not always work depending on the version you are migrating from and to.
November 25th, 2016
Wytze
Small cheat sheet of git commands I frequently use.
Cloning a repository
Revert changes in working copy
Revert changes in a single file
Revert all local commits
Remove untracked files and directories
Show stash diff
git stash show -p <stash-id>
Clear all stashes
Show remotes
Switch branch
Show local unpushed commits
git log origin/master..HEAD
Show local unpushed commit diff
git diff origin/master..HEAD
Undo commit
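Assuming the intent is to undo the last commit while keeping its changes staged:

```shell
# Undo the last commit, keep the changes in the index
git reset --soft HEAD~1
```

Use --hard instead if the changes themselves should be discarded as well.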
November 18th, 2016
Wytze
Just a small script to generate self-signed certificates.
#!/bin/bash
if [[ $EUID -ne 0 ]]; then
    echo "This script must be run as root" 1>&2
    exit 1
fi
if [ "$#" -ne 1 ]; then
    echo "No site name supplied, e.g. jenkins"
    exit 1
fi
# Generate a passphrase-protected key, then strip the passphrase again
openssl genrsa -des3 -passout pass:x -out "$1.pass.key" 2048
openssl rsa -passin pass:x -in "$1.pass.key" -out "$1.key"
rm "$1.pass.key"
# Create the signing request and self-sign it for one year
openssl req -new -key "$1.key" -out "$1.csr"
openssl x509 -req -days 365 -in "$1.csr" -signkey "$1.key" -out "$1.crt"
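To sanity-check the result afterwards (jenkins here stands for whatever site name you passed to the script):

```shell
# Show the subject and validity period of the generated certificate
openssl x509 -in jenkins.crt -noout -subject -dates
```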
November 15th, 2016
Wytze
Once you have set up your Atom editor, it is nice to store the list of installed packages so you can easily reinstall them or share them with friends. Here are the commands to do so from the command line:
backup:
apm list --installed --bare > packages.txt
install:
apm install --packages-file packages.txt
🙂
Just two small classes I wrote to allow easy validation when you need more contextual information while validating an entity. They let you do that without rewriting boilerplate code each time.
The annotation:
import java.lang.annotation.Documented;
import java.lang.annotation.Repeatable;
import java.lang.annotation.Retention;
import java.lang.annotation.Target;
import javax.validation.Constraint;
import javax.validation.Payload;
import static java.lang.annotation.ElementType.*;
import static java.lang.annotation.RetentionPolicy.*;

@Target({TYPE})
@Retention(RUNTIME)
@Repeatable(Predicate.List.class)
@Constraint(validatedBy = PredicateValidator.class)
@Documented
public @interface Predicate {

    String message() default "{predicate.invalid}";

    Class<?>[] groups() default {};

    Class<? extends Payload>[] payload() default {};

    String name();

    @Target({TYPE})
    @Retention(RUNTIME)
    @Documented
    public @interface List {
        Predicate[] value();
    }
}
The validator:
import java.util.HashMap;
import java.util.Map;
import javax.validation.ConstraintValidator;
import javax.validation.ConstraintValidatorContext;

public class PredicateValidator implements ConstraintValidator<Predicate, Object> {

    private static final Map<String, java.util.function.Predicate<Object>> PREDICATES = new HashMap<>();

    private java.util.function.Predicate<Object> predicate;

    @Override
    public void initialize(Predicate predicate) {
        if (!PREDICATES.containsKey(predicate.name())) {
            throw new IllegalArgumentException("No predicate with name " + predicate.name() + " found");
        }
        this.predicate = PREDICATES.get(predicate.name());
    }

    @Override
    public boolean isValid(Object value, ConstraintValidatorContext context) {
        return predicate.test(value);
    }

    public static void addPredicate(String name, java.util.function.Predicate<Object> p) {
        PREDICATES.put(name, p);
    }
}
And a small example:
import java.util.Objects;

@Predicate(name = "myPredicate")
@Predicate(name = "myPredicate2")
public class Pojo {

    static {
        PredicateValidator.addPredicate("myPredicate", obj -> {
            Pojo p = (Pojo) obj;
            System.out.println("Test 1");
            return Objects.equals(p.getValueOne(), p.getValueTwo());
        });
        PredicateValidator.addPredicate("myPredicate2", obj -> {
            Pojo p = (Pojo) obj;
            System.out.println("Test 2");
            return Objects.equals(p.getValueOne(), p.getValueTwo());
        });
    }

    private String valueOne;
    private String valueTwo;

    public String getValueOne() {
        return valueOne;
    }

    public void setValueOne(String valueOne) {
        this.valueOne = valueOne;
    }

    public String getValueTwo() {
        return valueTwo;
    }

    public void setValueTwo(String valueTwo) {
        this.valueTwo = valueTwo;
    }
}
The trick is to export the certificate and private key to PKCS#12 so that they can be imported with the Java keytool. Other ways of importing caused verification failures on the intermediate certificates for me.
openssl pkcs12 -export -out keystore.p12 -inkey certificate.pem -in certificate.pem
keytool -importkeystore -destkeystore keystore.jks -srcstoretype PKCS12 -srckeystore keystore.p12
# Change alias: keytool -changealias -alias 1 -keystore keystore.jks -keypass <pass> -destalias <destalias>
# Add intermediate certificates:
# openssl x509 -in root.crt -outform der -out root.der
# openssl x509 -in intermediate.crt -outform der -out intermediate.der
# keytool -import -trustcacerts -alias root -file root.der -keystore keystore.jks
# keytool -import -trustcacerts -alias intermediate -file intermediate.der -keystore keystore.jks
When used in Tomcat this would become something like the following:
<Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true"
           maxThreads="150" scheme="https" secure="true"
           clientAuth="false" sslProtocol="TLS"
           keystoreFile="/path/to/keystore.jks" keystorePass="<keystorePass>" keyAlias="<alias_for_the_key>" />
I decided to give my Raspberry Pi a new life and installed the latest version of Raspbian.
I also ordered two similar usb sticks of the same size to make a raid 1 (mirrored) device with mdadm which I want to export with NFS.
Note that you should know what you are doing here, since any of these steps might lead to data loss.
# Short session as root
sudo -i
# Determine where the usb sticks are.
fdisk -l
# Remove existing partitions and create the new Linux partitions (a combination of the p, d, n and w commands).
fdisk /dev/<usb-device1>
fdisk /dev/<usb-device2>
# Install mdadm. When asked about installing it to the root OS, answer 'none' since we will keep booting from the sd card.
apt-get install mdadm
mdadm --create --verbose /dev/md0 --level=mirror --raid-devices=2 /dev/<usb1-partition1> /dev/<usb2-partition1>
# Check if everything is ok.
mdadm --detail /dev/md0
# Append the array definition to the mdadm config
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
# Create the fs (-m 0 = no reserved blocks)
mkfs.ext4 -m 0 /dev/md0
mkdir /mnt/raid
mount /dev/md0 /mnt/raid
# I copied the directories I wanted to move over to the raid device (/var, /tmp, /opt, /root).
# Copy /var without the symlinks; use the same command for the other directories.
find /var -depth -type f -o -type d | cpio -pamVd /mnt/raid
# Determine the uuid of the raid device to be used in fstab
blkid /dev/md0
Edit fstab to mount everything from the raid device.
UUID="<your_raid_uuid>" /mnt/raid ext4 defaults,noatime 0 2
/mnt/raid/var /var none defaults,bind 0 0
/mnt/raid/tmp /tmp none defaults,bind 0 0
/mnt/raid/root /root none defaults,bind 0 0
/mnt/raid/home /home none defaults,bind 0 0
/mnt/raid/opt /opt none defaults,bind 0 0
Next I installed NFS and created a directory for the NFS shares.
apt-get install nfs-kernel-server
# Don't forget to start rpcbind. Otherwise you will get strange problems connecting to your nfs share from other machines (most probably a connection timeout).
service rpcbind start
mkdir /mnt/raid/share
mkdir /export
cd /export
ln -s /mnt/raid/share
Edited /etc/exports
/export/share 192.168.1.0/24(rw,sync,no_subtree_check)
# export everything. Restart nfs to be certain our changes made it.
exportfs -r
service nfs-kernel-server restart
Well that’s it for now. I will be testing this to see how it holds up on my Raspberry.