How to Set Up Proxmox


Prerequisites

  • Static IP address
  • DNS hostname
  • Proxmox ISO downloaded from Proxmox VE Downloads and written to a DVD or a USB flash drive.


Installation

  1. Boot the machine to the Proxmox install disk.
  2. On the boot screen, choose Install Proxmox VE using the arrow keys and press Enter.
  3. Click I agree to accept the EULA.
  4. Select the desired drive where Proxmox will be installed. Click Options.
  5. Enter 2 for the swapsize, 50 for the maxroot, and 100 for the minfree. These values may vary depending on the size of the install drive. Click Ok. Click Next.
  6. Select the appropriate timezone. Click Next.
  7. Enter a root user password twice and a valid email address (the email address will be used for alerts that the Proxmox server sends). Click Next.
  8. Choose the management interface from the drop-down and enter the desired hostname, IP address, netmask, gateway, and DNS server. Click Next.
  9. Review the configured settings and click Next.
  10. When the install is finished, click Reboot.


Basic Utilities

  1. After Proxmox is installed, log in as root with the password set during install, then install some basic utilities.
    apt update
    apt install vim screen sudo apt-transport-https curl
  2. Set a basic vim configuration.
    echo "filetype indent plugin on
    syntax on
    set mouse-=a
    set background=dark" > ~/.vimrc


CLI Command Aliases

  1. Log in to the CLI using the root user and the password set during install.
  2. Configure system-wide aliases.
    cat <<ALIAS >> /etc/bash.bashrc 
    alias ll='ls -lhF'
    alias lal='ls -alhF'
    ALIAS
    source /etc/bash.bashrc


Repositories

  1. Log in to the CLI using the root user and the password set during install.
  2. Add the free Proxmox repository to the list of software repositories.
    echo -e "\ndeb http://download.proxmox.com/debian/pve buster pve-no-subscription" >> /etc/apt/sources.list
  3. To prevent error messages from apt trying to access the Proxmox VE Enterprise Repository, remove that repository.
    rm /etc/apt/sources.list.d/pve-enterprise.list
  4. Update the system.
    apt update && apt full-upgrade && apt autoremove
  5. Once all the updates are complete, reboot the system.
    systemctl reboot



Remove Subscription Notification

NOTE: This will have to be re-done every time Proxmox is updated as the modified file will be overwritten by the update.

  1. Log in to the CLI using the root user and the password set during install.
  2. Remove the subscription pop-up.
    sed -i.bak "s/data.status !== 'Active'/false/g" /usr/share/javascript/proxmox-widget-toolkit/proxmoxlib.js
  3. Restart the Proxmox web interface.
    systemctl restart pveproxy


Web Interface Redirect

  1. Log in to the CLI using the root user and the password set during install.
  2. Install Nginx.
    apt install nginx
  3. Disable default Nginx config.
    rm /etc/nginx/sites-enabled/default
  4. Configure Nginx to redirect HTTP to HTTPS and proxy to the default Proxmox port.
    cat <<PROXMOX > /etc/nginx/sites-available/proxmox
    server {
            listen 80 default_server;
            server_name _;
            return 301 https://\$host\$request_uri;
    }
    server {
            listen 443 ssl default_server;
            server_name _;
    
            ssl_certificate_key /etc/pve/nodes/$HOSTNAME/pve-ssl.key;
            ssl_certificate /etc/pve/nodes/$HOSTNAME/pve-ssl.pem;
    
            location / {
                    proxy_pass https://localhost:8006;
                    proxy_set_header X-Forwarded-Proto https;
                    proxy_http_version 1.1;
                    proxy_set_header Connection \$http_connection;
                    proxy_set_header Origin http://\$host;
                    proxy_set_header Upgrade \$http_upgrade;
            }
    }
    PROXMOX
    ln -s /etc/nginx/sites-available/proxmox /etc/nginx/sites-enabled/proxmox
  5. Disable the default Nginx systemd unit.
    systemctl mask --now nginx
  6. Create a custom Nginx systemd unit that waits until the Proxmox services have started.
    cp /lib/systemd/system/nginx.service /etc/systemd/system/nginx-pve.service
    sed -i -e 's/^After=\(.*\)$/After=\1 etc-pve.mount pve-cluster.service pveproxy.service/' /etc/systemd/system/nginx-pve.service
    systemctl daemon-reload
  7. Start the custom Nginx systemd unit.
    systemctl enable --now nginx-pve
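
As an optional sanity check (not part of the original steps), the Nginx configuration can be validated and the redirect tested from the CLI. The first curl should return a 301 redirect and the second should reach the Proxmox login page.
    nginx -t
    curl -I http://localhost/
    curl -kI https://localhost/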


Firewall

  1. Log in to the CLI using the root user and the password set during install.
  2. Create the necessary directory.
    install -o root -g www-data -m 755 -d /etc/pve/firewall
  3. Add appropriate rules to the firewall config.
    cat <<FWCONF > /etc/pve/firewall/cluster.fw
    [OPTIONS]
      
    enable: 1
    
    [ALIASES]
    
    clients 192.168.1.0/24
    
    [RULES]
    
    IN Ping(ACCEPT) -log nolog
    IN ACCEPT -source clients -p tcp -dport 3128 -log nolog
    IN ACCEPT -source clients -p tcp -dport 8006 -log nolog
    IN HTTPS(ACCEPT) -source clients -log nolog
    IN HTTP(ACCEPT) -source clients -log nolog
    IN SSH(ACCEPT) -source clients -log nolog
    FWCONF
    • The ALIASES section defines a client subnet. Change this to match the subnet that is actually in use.
    • The Ping macro allows ping to the Proxmox server.
    • Port 3128 is for remote console (SPICE) connections.
    • Port 8006 is for the management web interface (without the Nginx proxy).
    • The HTTP and HTTPS macros are for the management web interface (with the Nginx proxy).
    • The SSH macro is for CLI management.
  4. Restart the firewall to apply the rules.
    pve-firewall restart
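
To confirm the firewall is running and to inspect the ruleset it generates, pve-firewall has status and compile subcommands; this is a quick check rather than a required step.
    pve-firewall status
    pve-firewall compile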


NTP

  1. Log in to the CLI using the root user and the password set during install.
  2. Set the NTP servers.
    sed -i.bak '/\[Time\]/a NTP=ntp-b.nist.gov pool.ntp.org' /etc/systemd/timesyncd.conf
  3. Restart NTP.
    systemctl restart systemd-timesyncd
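
To verify that time synchronization is working, check timedatectl; on Debian 10 the timesync-status subcommand should also be available and shows the server currently in use.
    timedatectl status
    timedatectl timesync-status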


DNS

  1. Log into the web interface as an administrative user.
  2. Click the name of the Proxmox node.
  3. Click DNS in the System section.
  4. Click Edit.
  5. Enter additional DNS servers.
    • 8.8.8.8
    • 1.1.1.1
  6. Click OK.
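
The same change can also be made from the CLI. The web interface writes these settings to /etc/resolv.conf, so a rough equivalent (assuming the example servers above) is:
    cat <<DNS >> /etc/resolv.conf
    nameserver 8.8.8.8
    nameserver 1.1.1.1
    DNS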


Certificate

NOTE: A Let's Encrypt certificate may be used here, but it will probably have to be placed manually in the correct directory.

  1. Log into the web interface as an administrative user.
  2. Click the name of the Proxmox node.
  3. Click Certificates in the System section.
  4. Click Upload Custom Certificate.
  5. Either paste the contents of the private key into the Private Key box or click From File and choose the private key file.
  6. Either paste the contents of the certificate into the Certificate Chain box or click From File and choose the certificate file. Be aware that the certificate and any intermediate/root certificates should all be included in the file.
  7. Click Upload.
  8. A message will appear that the web interface will be restarted and that the page should be reloaded. Either reload the page manually or wait for it to reload automatically.
  9. Log in to the CLI using the root user and the password set during install.
  10. Update the Nginx reverse proxy config with the newly uploaded key.
    sed -i.bak -e 's/pve-ssl\./pveproxy-ssl./' /etc/nginx/sites-available/proxmox
  11. Restart Nginx.
    systemctl restart nginx
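
The upload can probably also be done from the CLI with pvenode instead of the web dialog; the file paths below are placeholders for wherever the full certificate chain and private key were copied on the node.
    pvenode cert set /root/fullchain.pem /root/privkey.pem
    systemctl restart pveproxy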


Authentication

By default, Proxmox has two authentication realms: PAM and PVE.

  • PAM
    • Corresponds to Linux users in the CLI.
    • Users' passwords are managed from the CLI.
    • Users can have SSH access or sudo access on the CLI.
  • PVE
    • Users do not have CLI, SSH, or sudo access.
    • All user information can be set up and managed through the GUI.
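
For example, a PVE-realm user (no CLI access) can be created entirely from the command line with pveum, the Proxmox user management tool; the username and details below are only examples.
    pveum user add jdoe@pve --firstname Jane --lastname Doe --email jdoe@example.com
    pveum passwd jdoe@pve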


Create Groups

To separate user and admin privileges, different groups can be created.

  1. Log into the web interface as an administrative user.
  2. Click Datacenter.
  3. Click Groups in the Permissions section.
  4. Click Create.
  5. Name the group user.
  6. Click Create.
  7. Back on the Groups page, click Create again.
  8. Name the group admin.
  9. Click Create.


Set Group Permissions

These settings give the "admin" group full permissions and the "user" group only enough permissions to create, edit, and delete VMs.

  1. Log into the web interface as an administrative user.
  2. Click Datacenter.
  3. Click Permissions.
  4. Click Add, then Group Permission.
  5. Set properties
    • Path: /
    • Group: admin
    • Role: Administrator
  6. Click Add.
  7. Back on the Permissions page, click Add, then Group Permission.
  8. Set properties
    • Path: /
    • Group: user
    • Role: PVEVMAdmin
  9. Click Add.
  10. Back on the Permissions page, click Add, then Group Permission.
  11. Set properties
    • Path: /
    • Group: user
    • Role: PVEDatastoreUser
  12. Click Add.
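
These ACLs can likely also be set from the CLI with pveum; the paths, groups, and roles match the ones used above, but check pveum help acl modify for the exact options on a given version.
    pveum acl modify / --groups admin --roles Administrator
    pveum acl modify / --groups user --roles PVEVMAdmin
    pveum acl modify / --groups user --roles PVEDatastoreUser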


Adding PAM Users

  1. Log in to the CLI using the root user and the password set during install.
  2. Add a user. Replace username with the desired username.
    useradd -m -U -s /bin/bash username
  3. Add the user to the sudo group if they need sudo privileges on the CLI. Replace username with the username of the user that needs full sudo access.
    usermod -aG sudo username
  4. Log into the web interface as an administrative user.
  5. Click Datacenter.
  6. Click Users in the Permissions section.
  7. Click Add.
  8. Set properties
    • User name: same username as created in the CLI
    • Realm: Linux PAM standard authentication
    • Group: either user or admin as appropriate
    • First Name: user's first name
    • Last Name: user's last name
  9. Click Add.
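
Alternatively, once the Linux account exists, the corresponding Proxmox user can likely be created from the CLI with pveum; username here is the same placeholder as above, and the group should be user or admin as appropriate.
    pveum user add username@pam --groups user --firstname First --lastname Last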



SSH

  • If a cluster will be created, this section should not be done until after the cluster is set up, because the cluster creation process requires root user password login.
  • This section should not be done until a non-root user has been created and set up with SSH key login; otherwise, SSH access may be lost.
  1. Log in to the CLI using the root user and the password set during install.
  2. Configure secure settings for SSH.
    sed -i.bak -e 's/^\s*Subsystem\s\+sftp\(.*\)/Subsystem sftp\1 -f AUTHPRIV -l INFO/' /etc/ssh/sshd_config
    cat <<SSHCONF >> /etc/ssh/sshd_config
    
    HostKey /etc/ssh/ssh_host_ed25519_key
    HostKey /etc/ssh/ssh_host_rsa_key
    HostKey /etc/ssh/ssh_host_ecdsa_key
    KexAlgorithms curve25519-sha256@libssh.org,ecdh-sha2-nistp521,ecdh-sha2-nistp384,ecdh-sha2-nistp256,diffie-hellman-group-exchange-sha256
    Ciphers chacha20-poly1305@openssh.com,aes256-gcm@openssh.com,aes128-gcm@openssh.com,aes256-ctr,aes192-ctr,aes128-ctr
    MACs hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-512,hmac-sha2-256,umac-128@openssh.com
    AuthenticationMethods publickey
    LogLevel VERBOSE
    SSHCONF
  3. Restart SSH.
    systemctl restart ssh
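
As a safeguard, the configuration syntax can be validated with sshd's test mode, and key-based login should be re-tested from a second session (on another machine) while the current session stays open; user and proxmox-host below are placeholders.
    sshd -t
    ssh -o PreferredAuthentications=publickey user@proxmox-host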


Optional Setup

Clustering

Proxmox servers can be joined together in a cluster for easier management and to enable such features as migrations. This is most effective if there is some shared storage between the nodes such as a SAN, a NAS, or using the built-in Ceph distributed storage.


Creating A Cluster

  1. Make sure to have the root user password for each of the nodes that will join the cluster.
  2. Log into the web interface as an administrative user.
  3. Click Datacenter.
  4. Click Cluster.
  5. Click Create Cluster.
  6. Enter the desired name in the Cluster Name box.
  7. Click Create.
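
The same thing can be done from the CLI on the first node with pvecm; CLUSTERNAME is a placeholder for the desired cluster name.
    pvecm create CLUSTERNAME
    pvecm status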


Adding Nodes To The Cluster

  1. Make sure to have the root user password for each of the nodes that will join the cluster.
  2. Log into the web interface of one of the nodes already in the cluster as an administrative user.
  3. Click Datacenter.
  4. Click Cluster.
  5. Click Join Information.
  6. Click Copy Information.
  7. Log into the web interface of the node to be joined to the cluster as an administrative user.
  8. Click Datacenter.
  9. Click Cluster.
  10. Click Join Cluster.
  11. Paste in the previously copied join information into the Information box.
  12. Enter the root user password of the already joined cluster node into the Password box.
  13. Click Join.
  14. The web interface will likely lose connection. Go back to the web interface of the node that was already in the cluster and the new node should have appeared.
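
A node can also be joined from its own CLI with pvecm, which prompts for the root password of the existing node; the IP address below is only an example of an existing cluster member.
    pvecm add 192.168.1.10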


Removing Nodes From A Cluster

  1. Log in to the CLI of one of the other nodes using the root user and the password set during install.
  2. List the nodes.
    pvecm nodes
  3. Power off the node to be removed. It is critically important that the node be powered off and never again be powered on.
  4. Delete the node. Replace name with the name of the node.
    pvecm delnode name


Monitoring The Cluster

  1. Log in to the CLI of one of the nodes in the cluster using the root user and the password set during install.
  2. To view all the nodes.
    pvecm nodes
  3. To view the status of the cluster.
    pvecm status


Ceph Distributed Storage

Prerequisites

  • The Proxmox servers should be in a cluster.
  • There should be at least three members in the cluster.
  • There should be some unused disks on each cluster node.
  • Ideally each node would have an identical number and size of disks.


Dedicated Storage Network

It is highly recommended to use a separate physical network connection for the Ceph storage. Using the same network connection for storage and VMs can cause significant performance problems.

The separate physical connection should be on a different network, e.g. a VLAN or a separate "dumb" switch. The separate network does not have to be connected to the Internet or any other network since it will only be used for storage traffic between the nodes.

  1. Ensure that one or more interfaces on the server are connected to access ports on a VLAN with no defined subnet (L2 only).
  2. Log into the web interface as an administrative user.
  3. Click on the first node in the cluster.
  4. Click on Network in the System section.
  5. Select the interface to be used for a dedicated storage network (or create a Linux Bond).
  6. Click Edit.
  7. Set the following options.
    • Autostart: checked
    • IPv4/CIDR: use a private IP, e.g. 192.168.123.1/24
  8. Click OK.
  9. Reboot the node for the settings to take effect.
  10. Once the node has finished rebooting, repeat the steps on all the other nodes, using a different IP address in the private network for each.
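
For reference, the web interface writes this configuration to /etc/network/interfaces. The resulting stanza might look roughly like the following, where eno2 and the address are only examples for the first node.
    auto eno2
    iface eno2 inet static
            address 192.168.123.1
            netmask 255.255.255.0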


Firewall
  1. Log into the web interface as an administrative user.
  2. Click Datacenter.
  3. Click Firewall.
  4. Click Add.
  5. Add a firewall rule for the storage network with the following properties.
    • Enable: checked
    • Source: subnet of the storage network, e.g. 192.168.123.0/24
  6. Click Add.


Initial Setup

  1. Log into the web interface as an administrative user.
  2. Click on the first node in the cluster.
  3. Click on Ceph.
  4. Click on the button to Install Ceph.
  5. In the dialog that pops up, click Next.
  6. When prompted to continue, enter y and press Enter.
  7. When the install is finished, click Next.
  8. Choose the previously set up storage network in the Public Network drop-down.
  9. Click Next.
  10. Click Finish.
  11. Repeat the process to install Ceph on each of the other cluster nodes. The configuration step will be skipped; it only takes place on the first node.
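
The installation and initial configuration can also be done from the CLI with pveceph; the network below assumes the example storage subnet from earlier, and init is only run on the first node.
    pveceph install
    pveceph init --network 192.168.123.0/24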


Add Monitors and Managers

There should be at least 3 monitors and 3 managers for small Ceph clusters. The monitors and managers should be on the same nodes.

  1. Log into the web interface as an administrative user.
  2. Click on the first node in the cluster.
  3. Click on Ceph.
  4. Click on Monitor.
  5. Under Monitor, click Create.
  6. Choose one of the other nodes.
  7. Click Create.
  8. Repeat for another node for a total of 3 monitors.
  9. Under Manager, click Create.
  10. Choose the same node as the monitor.
  11. Repeat for another node for a total of 3 managers.
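
Monitors and managers can likely also be created from the CLI by running pveceph on each node that should host them.
    pveceph mon create
    pveceph mgr create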


Add OSDs

A Ceph OSD (Object Storage Daemon) is essentially a single disk managed by Ceph.

  1. Log into the web interface as an administrative user.
  2. Click on the first node in the cluster.
  3. Click on Ceph.
  4. Click on OSD.
  5. Click Create: OSD.
  6. Choose an unused disk and click Create.
  7. Repeat for each unused disk on the node.
  8. Repeat for each node in the cluster.
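
OSDs can also be created from the CLI with pveceph; /dev/sdb is a placeholder for an unused disk on the node.
    pveceph osd create /dev/sdb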


Add Pool

A Ceph pool is a storage area that will be used for VM or container disks.

  1. Log into the web interface as an administrative user.
  2. Click on the first node in the cluster.
  3. Click on Ceph.
  4. Click on Pools.
  5. Click Create.
  6. Enter an appropriate name, e.g. vm_disks.
  7. Use the online PG calc tool (https://ceph.com/pgcalc/) to determine the pg_num setting.
  8. Click Create.
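
The pool can likely also be created from the CLI with pveceph, using the pg_num value from the calculator; 128 and the pool name are only examples.
    pveceph pool create vm_disks --pg_num 128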


Add CephFS

CephFS is a storage area that will be used for ISO images, container templates, and backups.

  1. Log into the web interface as an administrative user.
  2. Click on the first node in the cluster.
  3. Click on Ceph.
  4. Click on CephFS.
  5. Click Create under Metadata Servers.
  6. Choose the first node.
  7. Click Create.
  8. Repeat for each node.
  9. Click Create CephFS.
  10. Enter an appropriate name, e.g. iso.
  11. Click Create.
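
The metadata servers and the filesystem can likely also be created from the CLI with pveceph; mds create is run once on each node that should host a metadata server, and the name matches the example above.
    pveceph mds create
    pveceph fs create --name iso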