Ref

Installing Python packages (Offline mode) - IBM Documentation

Download packages from pip

$ pip download -d /path/to/dir PACKAGE_NAME==VERSION
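
For example, to fetch specific versions into a local directory (the package names and versions below are only illustrative):

$ pip download -d ./offline_pkgs numpy==1.2.1 keras==2.0.2
$ ls ./offline_pkgs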

Create requirements

$ vi pre_req.txt
numpy==1.2.1
keras==2.0.2

Install packages from the local directory

$ pip install --no-deps --no-index --find-links=/path/to/dir -r pre_req.txt
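
Because the install uses --no-deps, any dependencies must already be present in the directory; on the machine with internet access they can be fetched from the same requirements file:

$ pip download -d /path/to/dir -r pre_req.txt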

Remove the locally installed packages

$ pip uninstall -r pre_req.txt


# network session check
ss -ant | awk '{print $1}' | grep -v '[a-z]' | sort | uniq -c
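
The awk/grep part keeps only the State column and drops the header, so the result is a count per TCP state, roughly like this (the numbers are illustrative):

     12 LISTEN
    347 ESTAB
     58 TIME-WAIT
      3 CLOSE-WAIT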

# resource limit check
ulimit -a
/etc/security/limits.conf
sudo prlimit --nofile --output RESOURCE,SOFT,HARD --pid $PID
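
If a limit turns out to be too low, prlimit can raise it for the running process, and an entry in limits.conf makes it persistent for new sessions (the user name and values below are only an example):

# raise the soft/hard open-file limit for a running process
sudo prlimit --nofile=65535:65535 --pid $PID

# persistent per-user limits in /etc/security/limits.conf
myuser  soft  nofile  65535
myuser  hard  nofile  65535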


Ref

https://help.ubuntu.com/community/HighlyAvailableNFS

 


Add host entries and install packages on each node

# vi /etc/hosts

[IPADDR1]    nfs1
[IPADDR2]    nfs2

# sudo apt-get install ntp drbd8-utils heartbeat

Create a DRBD resource config named 'nfs' on each node

# vi /etc/drbd.d/nfs.res
resource nfs {
        protocol C;

        handlers {
                pri-on-incon-degr "echo o > /proc/sysrq-trigger ; halt -f";
                pri-lost-after-sb "echo o > /proc/sysrq-trigger ; halt -f";
                local-io-error "echo o > /proc/sysrq-trigger ; halt -f";
                outdate-peer "/usr/lib/heartbeat/drbd-peer-outdater -t 5";
        }

        startup {
                degr-wfc-timeout 120;
        }

        disk {
                on-io-error detach;
                no-disk-flushes;
                no-disk-barrier;
                c-plan-ahead 0;
                c-fill-target 1M;
                c-min-rate 180M;
                c-max-rate 720M;
        }

        net {
                cram-hmac-alg sha1;
                shared-secret "PASSWORD";
                after-sb-0pri disconnect;
                after-sb-1pri disconnect;
                after-sb-2pri disconnect;
                rr-conflict disconnect;
                max-buffers 40k;
                sndbuf-size 0;
                rcvbuf-size 0;
        }

        syncer {
                rate 210M;
                verify-alg sha1;
                al-extents 3389;
        }

        on nfs1 {
                device  /dev/drbd0;
                disk    /dev/sdb1;
                address [IPADDR1]:7788;
                meta-disk internal;
        }

        on nfs2 {
                device  /dev/drbd0;
                disk    /dev/sdb1;
                address [IPADDR2]:7788;
                meta-disk internal;
        }
}
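
Before creating any metadata, the resource file can be sanity-checked by letting drbdadm parse it and print back what it understood (assuming the resource name nfs used above):

# sudo drbdadm dump nfs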

Set up DRBD on each node

# sudo chgrp haclient /sbin/drbdsetup
# sudo chmod o-x /sbin/drbdsetup
# sudo chmod u+s /sbin/drbdsetup
# sudo chgrp haclient /sbin/drbdmeta
# sudo chmod o-x /sbin/drbdmeta
# sudo chmod u+s /sbin/drbdmeta

# sudo drbdadm create-md nfs
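
If the drbd service is not running yet, the resource still needs to be attached and connected on each node before either side can be promoted; starting the service (all resources) or bringing up just this one should do it:

# sudo systemctl start drbd
# sudo drbdadm up nfs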

Master node

# sudo drbdadm -- --overwrite-data-of-peer primary nfs

Check Primary/Secondary state and sync progress

# cat /proc/drbd
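
The cs:, ro: and ds: fields show the connection state, the Primary/Secondary roles and the disk states; the initial sync of a large device takes a while, so it is handy to watch it:

# watch -n 2 cat /proc/drbd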

After the sync has completed on both nodes, configure NFS

Create a filesystem on the DRBD device (on the primary) and install the NFS server on both nodes

# sudo mkfs.ext4 /dev/drbd0
# mkdir -p /srv/data
# sudo apt-get install nfs-kernel-server

Master node again

# sudo mount /dev/drbd0 /srv/data
# sudo mv /var/lib/nfs/ /srv/data/
# sudo ln -s /srv/data/nfs/ /var/lib/nfs
# sudo mv /etc/exports /srv/data
# sudo ln -s /srv/data/exports /etc/exports
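
With /etc/exports now living on the replicated volume, the export itself is defined as usual; a minimal entry could look like this (the client network and options are only an example):

# vi /etc/exports
/srv/data    192.168.0.0/24(rw,sync,no_subtree_check)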

Slave node

# sudo rm -rf /var/lib/nfs
# sudo ln -s /srv/data/nfs/ /var/lib/nfs
# sudo rm /etc/exports
# sudo ln -s /srv/data/exports /etc/exports

Configure heartbeat on both nodes

# vi /etc/heartbeat/ha.cf
logfacility local0
keepalive 2
deadtime 30
warntime 10
initdead 120
bcast enp1s0f0np0
node nfs1
node nfs2


# sudo vi /etc/heartbeat/authkeys

auth 3
3 md5 PASSWORD



# sudo chmod 600 /etc/heartbeat/authkeys
# vi /etc/heartbeat/haresources

nfs1 IPaddr::NFS_MASTER_IP/17/IF_NAME drbddisk::nfs Filesystem::/dev/drbd0::/srv/data::ext4 nfs-kernel-server
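
The haresources line lists, in order, everything heartbeat manages with nfs1 as the preferred node: the floating service IP, promotion of the DRBD resource 'nfs', the filesystem mount, and the NFS server. With concrete (purely illustrative) values it would read:

nfs1 IPaddr::192.168.0.100/24/eth0 drbddisk::nfs Filesystem::/dev/drbd0::/srv/data::ext4 nfs-kernel-server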



# sudo systemctl enable heartbeat

# sudo systemctl enable drbd

# sudo reboot now
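
After the reboot, it is worth verifying on the active node that heartbeat brought everything up: DRBD in the Primary role, /dev/drbd0 mounted on /srv/data, the floating NFS IP on the interface, and the exports being served.

# cat /proc/drbd
# mount | grep /srv/data
# ip addr show
# sudo exportfs -v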

Copy an LXD container from a local LXD host to a remote LXD server

Set up the remote LXD endpoint

On the remote LXD server, enable the HTTPS listener and set a trust password:

lxc config set core.https_address REMOTE_IP:8443
lxc config set core.trust_password PASSWORD_STRING

Then register the remote on the local host (this prompts for the trust password):

lxc remote add REMOTE_NAME REMOTE_IP

Copy container

You should stop the container that you want to copy

lxc copy CONTAINER_NAME_ON_LOCAL REMOTE_NAME:CONTAINER_REMOTE_NAME
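
With illustrative names, copying a container called web01 to a remote registered as backupsrv looks like this:

lxc stop web01
lxc copy web01 backupsrv:web01
lxc list backupsrv: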

 

Ref

https://github.com/geofront-auth/geofront

 


Colonize automation for geofront server

# colonize.py
import os
import json

# create public key
create_pub_key = os.popen("ssh-keygen -y -f /var/lib/geofront/id_rsa > /var/lib/geofront/id_rsa.pub").read()

# load server list
with open("/opt/geofront/server/server.json", 'r') as f:
        ds = json.load(f)

hosts = list(ds.keys())

# get password from env variable
pw = os.environ['PASSWORD']

# copy the public key to each remote host's authorized_keys
for host in hosts:
        remote = ds[host]["account"] + "@" + ds[host]["ip"]
        print("Executing ssh-copy-id on: " + host)
        cmd = "sh /ssh-copy-id.sh " + remote + " " + pw
        p = os.popen(cmd)
        p.read()
        # os.popen does not raise on failure; a non-zero exit status is returned by close()
        if p.close() is not None:
                os.popen("echo " + remote + " >> /failed_ssh_host.log").read()
                print("ssh-copy-id failed on " + host + ": check /failed_ssh_host.log")

# record the finishing time
print("Finished at: " + os.popen("date").read().strip())
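
The script assumes server.json maps each host label to its SSH account and IP address, and reads the password from the PASSWORD environment variable; a run would look roughly like this (all values illustrative):

$ cat /opt/geofront/server/server.json
{
    "web01": {"account": "ubuntu", "ip": "10.0.0.11"},
    "db01": {"account": "ubuntu", "ip": "10.0.0.12"}
}
$ PASSWORD='remote-password' python3 colonize.py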

 

# ssh-copy-id.sh
#!/bin/bash
remote=$1
pw=$2

# spawn & expect: enter for command line interaction
#spawn ssh-copy-id -o StrictHostKeyChecking=no -i /var/lib/geofront/id_rsa.pub $remote
expect << EOF
spawn ssh-copy-id -i /var/lib/geofront/id_rsa.pub $remote
expect {
    "(yes/no)?" { send "yes\n"; exp_continue }
    "password:" { send "$pw\n"; exp_continue }
    eof
}
EOF
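
The helper is called the same way colonize.py calls it, with the target and the password as positional arguments (values illustrative):

$ sh /ssh-copy-id.sh ubuntu@10.0.0.11 'remote-password'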
