When I try to start gnome-terminal through xterm, I see this error: Error calling StartServiceByName for org.gnome.Terminal
To fix this issue: $ localectl set-locale LANG="en_US.UTF-8"
Reboot
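After rebooting, you can confirm that the locale actually changed with:

localectl status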
Socket connections use file descriptors, and the number of open file descriptors per process is limited.
On Ubuntu, run ulimit -a to see the current limits. (ulimit is a shell builtin, not a binary, so it does not need sudo.)
To raise the open-file limit to 4096, run ulimit -n 4096.
To go beyond 4096, edit the system file /etc/security/limits.conf, as sketched below.
In the current terminal, you will need to start a new login session (e.g. via su) for the settings to take effect.
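For reference, a minimal sketch of the limits.conf entries, assuming the process runs as the (hypothetical) user www-data and you want a 65535 open-file cap; adjust the user name and the number to your setup:

# /etc/security/limits.conf
www-data  soft  nofile  65535
www-data  hard  nofile  65535

After editing the file, log out and back in (or start a fresh session with su - www-data) so PAM re-reads the limits.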
To resolve this issue, first list all running Salt jobs:
salt-run jobs.active
https://docs.saltstack.com/en/latest/topics/jobs/
Then kill the job:

# kill all jobs
salt '*' saltutil.kill_all_jobs

# kill the job with a specific id
salt '*' saltutil.kill_job <job id>
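For example, assuming the job id reported by jobs.active is 20151101225849096345 and the affected minion is named web01 (both placeholders), you can kill it on just that minion:

salt 'web01' saltutil.kill_job 20151101225849096345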
Turn off Secure Boot in your BIOS/UEFI settings, and then run:
sudo apt-get install linux-headers-$(uname -r|sed 's,[^-]*-[^-]*-,,') virtualbox
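If you want to confirm whether Secure Boot is actually disabled before installing, mokutil can report the current state (the mokutil package may need to be installed first):

sudo apt-get install mokutil
mokutil --sb-state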
Background: We have a load balancer powered by LVS + ldirectord. You can find a guide on how to set it up yourself here.
Recently, when we increased the web server pool to 10 servers, we found that the load was not evenly balanced. After reading some documentation, we found that it could be related to LVS persistence.
Below are some very interesting findings.
quiescent = yes|no

If yes, then when real or failback servers are determined to be down, they are not actually removed from the kernel's LVS table. Rather, their weight is set to zero, which means that no new connections will be accepted.

This has the side effect that if the real server has persistent connections, new connections from any existing clients will continue to be routed to the real server until the persistent timeout expires. See ipvsadm for more information on persistent connections.

This side effect can be avoided by running the following:

echo 1 > /proc/sys/net/ipv4/vs/expire_quiescent_template

If the proc file isn't present, this probably means that the kernel doesn't have LVS support, LVS support isn't loaded, or the kernel is too old to have the proc file. Running ipvsadm as root should load LVS into the kernel if it is possible.

If no, then the real or failback servers will be removed from the kernel's LVS table. If defined in a virtual server section then the global value is overridden.

Default: yes
net.ipv4.vs.expire_nodest_conn=0: maintain the entry in the table (but silently drop any packets sent), allowing service to continue if the ipvsadm table entries are restored.

net.ipv4.vs.expire_nodest_conn=1: expire the entry in the table immediately and inform the client that the connection is closed. This is the behaviour some people expect when running `ipvsadm -C`.
expire_quiescent_template - BOOLEAN
0 - disabled (default)
not 0 - enabled

When set to a non-zero value, the load balancer will expire persistent templates when the destination server is quiescent. This may be useful when a user makes a destination server quiescent by setting its weight to 0 and it is desired that subsequent, otherwise persistent, connections are sent to a different destination server. By default, new persistent connections are allowed to quiescent destination servers. If this feature is enabled, the load balancer will expire the persistence template if it is to be used to schedule a new connection and the destination server is quiescent.
Source:
http://linux.die.net/man/8/ipvsadm
http://www.austintek.com/LVS/LVS-HOWTO/HOWTO/LVS-HOWTO.persistent_connection.html
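For completeness, a minimal sketch of how these settings could be applied, assuming a standard sysctl setup (the values shown are what we wanted for our pool; yours may differ):

# apply immediately
sysctl -w net.ipv4.vs.expire_quiescent_template=1
sysctl -w net.ipv4.vs.expire_nodest_conn=1

# to persist across reboots, add to /etc/sysctl.conf (or a file under /etc/sysctl.d/)
net.ipv4.vs.expire_quiescent_template = 1
net.ipv4.vs.expire_nodest_conn = 1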
I have a virtual server on my machine and I used to log in to it via WinSCP, but recently it keeps showing this error.
I turned off the firewall on both the client and the server (actually, no one had changed it), but the problem remained.
In the end, I found that it was caused by an IP address conflict in my local network …
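If you suspect a duplicate IP, arping (from iputils) can help confirm it; here eth0 and 192.168.1.10 are placeholders for your own interface and the server's address:

sudo arping -D -I eth0 -c 3 192.168.1.10
# replies coming back from a MAC address other than your server's mean the IP is already in use elsewhere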