Closed
Description
This happens when I log in to the container and then can't quit with Ctrl-C.
My system is Ubuntu 12.04, kernel 3.8.0-25-generic.
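For reference, these host details can be gathered with something like the following (a minimal sketch, assuming a stock Ubuntu install with lsb_release available):
# Print the distribution release and the running kernel version
lsb_release -ds
uname -r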
docker version:
root@wutq-docker:~# docker version
Client version: 0.10.0
Client API version: 1.10
Go version (client): go1.2.1
Git commit (client): dc9c28f
Server version: 0.10.0
Server API version: 1.10
Git commit (server): dc9c28f
Go version (server): go1.2.1
Last stable version: 0.10.0
I used the script https://raw.githubusercontent.com/dotcloud/docker/master/contrib/check-config.sh to check my kernel configuration, and everything was fine.
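For anyone who wants to run the same check, a minimal sketch (assuming curl is available and the script URL above is still valid):
# Fetch Docker's kernel-config checker and run it against the running kernel
curl -fsSL https://raw.githubusercontent.com/dotcloud/docker/master/contrib/check-config.sh -o check-config.sh
chmod +x check-config.sh
sudo ./check-config.sh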
I watched the syslog and found these messages:
May 6 11:30:33 wutq-docker kernel: [62365.889369] unregister_netdevice: waiting for lo to become free. Usage count = 3
May 6 11:30:44 wutq-docker kernel: [62376.108277] unregister_netdevice: waiting for lo to become free. Usage count = 3
May 6 11:30:54 wutq-docker kernel: [62386.327156] unregister_netdevice: waiting for lo to become free. Usage count = 3
May 6 11:31:02 wutq-docker kernel: [62394.423920] INFO: task docker:1024 blocked for more than 120 seconds.
May 6 11:31:02 wutq-docker kernel: [62394.424175] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
May 6 11:31:02 wutq-docker kernel: [62394.424505] docker D 0000000000000001 0 1024 1 0x00000004
May 6 11:31:02 wutq-docker kernel: [62394.424511] ffff880077793cb0 0000000000000082 ffffffffffffff04 ffffffff816df509
May 6 11:31:02 wutq-docker kernel: [62394.424517] ffff880077793fd8 ffff880077793fd8 ffff880077793fd8 0000000000013f40
May 6 11:31:02 wutq-docker kernel: [62394.424521] ffff88007c461740 ffff880076b1dd00 000080d081f06880 ffffffff81cbbda0
May 6 11:31:02 wutq-docker kernel: [62394.424526] Call Trace:
May 6 11:31:02 wutq-docker kernel: [62394.424668] [<ffffffff816df509>] ? __slab_alloc+0x28a/0x2b2
May 6 11:31:02 wutq-docker kernel: [62394.424700] [<ffffffff816f1849>] schedule+0x29/0x70
May 6 11:31:02 wutq-docker kernel: [62394.424705] [<ffffffff816f1afe>] schedule_preempt_disabled+0xe/0x10
May 6 11:31:02 wutq-docker kernel: [62394.424710] [<ffffffff816f0777>] __mutex_lock_slowpath+0xd7/0x150
May 6 11:31:02 wutq-docker kernel: [62394.424715] [<ffffffff815dc809>] ? copy_net_ns+0x69/0x130
May 6 11:31:02 wutq-docker kernel: [62394.424719] [<ffffffff815dc0b1>] ? net_alloc_generic+0x21/0x30
May 6 11:31:02 wutq-docker kernel: [62394.424724] [<ffffffff816f038a>] mutex_lock+0x2a/0x50
May 6 11:31:02 wutq-docker kernel: [62394.424727] [<ffffffff815dc82c>] copy_net_ns+0x8c/0x130
May 6 11:31:02 wutq-docker kernel: [62394.424733] [<ffffffff81084851>] create_new_namespaces+0x101/0x1b0
May 6 11:31:02 wutq-docker kernel: [62394.424737] [<ffffffff81084a33>] copy_namespaces+0xa3/0xe0
May 6 11:31:02 wutq-docker kernel: [62394.424742] [<ffffffff81057a60>] ? dup_mm+0x140/0x240
May 6 11:31:02 wutq-docker kernel: [62394.424746] [<ffffffff81058294>] copy_process.part.22+0x6f4/0xe60
May 6 11:31:02 wutq-docker kernel: [62394.424752] [<ffffffff812da406>] ? security_file_alloc+0x16/0x20
May 6 11:31:02 wutq-docker kernel: [62394.424758] [<ffffffff8119d118>] ? get_empty_filp+0x88/0x180
May 6 11:31:02 wutq-docker kernel: [62394.424762] [<ffffffff81058a80>] copy_process+0x80/0x90
May 6 11:31:02 wutq-docker kernel: [62394.424766] [<ffffffff81058b7c>] do_fork+0x9c/0x230
May 6 11:31:02 wutq-docker kernel: [62394.424769] [<ffffffff816f277e>] ? _raw_spin_lock+0xe/0x20
May 6 11:31:02 wutq-docker kernel: [62394.424774] [<ffffffff811b9185>] ? __fd_install+0x55/0x70
May 6 11:31:02 wutq-docker kernel: [62394.424777] [<ffffffff81058d96>] sys_clone+0x16/0x20
May 6 11:31:02 wutq-docker kernel: [62394.424782] [<ffffffff816fb939>] stub_clone+0x69/0x90
May 6 11:31:02 wutq-docker kernel: [62394.424786] [<ffffffff816fb5dd>] ? system_call_fastpath+0x1a/0x1f
May 6 11:31:04 wutq-docker kernel: [62396.466223] unregister_netdevice: waiting for lo to become free. Usage count = 3
May 6 11:31:14 wutq-docker kernel: [62406.689132] unregister_netdevice: waiting for lo to become free. Usage count = 3
May 6 11:31:25 wutq-docker kernel: [62416.908036] unregister_netdevice: waiting for lo to become free. Usage count = 3
May 6 11:31:35 wutq-docker kernel: [62427.126927] unregister_netdevice: waiting for lo to become free. Usage count = 3
May 6 11:31:45 wutq-docker kernel: [62437.345860] unregister_netdevice: waiting for lo to become free. Usage count = 3
After this happened, I opened another terminal, killed the process, and then restarted Docker, but Docker hung.
I rebooted the host, and it still displayed those messages for several minutes during shutdown.
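A quick way to confirm a host has entered this state, assuming the message lands in the kernel ring buffer and syslog as it did above, is something like:
# Look for the tell-tale message in the kernel ring buffer and in syslog
dmesg | grep -i unregister_netdevice
grep -i unregister_netdevice /var/log/syslog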
Activity
drpancake commented on May 23, 2014
I'm seeing a very similar issue for eth0. Ubuntu 12.04 also.
I have to power cycle the machine. From /var/log/kern.log:
egasimus commented on Jun 4, 2014
Hey, this just started happening for me as well.
Docker version:
Kernel log: http://pastebin.com/TubCy1tG
System details:
Running Ubuntu 14.04 LTS with a patched kernel (3.14.3-rt4). I have yet to see it happen with the default linux-3.13.0-27-generic kernel. What's funny, though, is that when this happens, all my terminal windows freeze, letting me type a few characters at most before that. The same fate befalls any new ones I open, too, and I end up needing to power cycle my poor laptop just like the good doctor above. For the record, I'm running the fish shell in urxvt or xterm under xmonad. I haven't checked whether it affects plain bash.
egasimus commented on Jun 5, 2014
This might be relevant:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1065434#yui_3_10_3_1_1401948176063_2050
Sure enough, one of the times this happened for me was right after apt-getting a package with a ton of dependencies.
drpancake commented on Jun 5, 2014
Upgrading from Ubuntu 12.04.3 to 14.04 fixed this for me without any other changes.
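For anyone on 12.04 who wants to try the same thing, the usual upgrade path looks roughly like this (a sketch, assuming a stock 12.04 install with update-manager-core installed; back up first):
# Bring the current release fully up to date, then upgrade to the next LTS
sudo apt-get update && sudo apt-get dist-upgrade
sudo do-release-upgrade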
csabahenk commented on Jul 22, 2014
I experience this on RHEL7, 3.10.0-123.4.2.el7.x86_64
egasimus commented on Jul 22, 2014
I've noticed the same thing happening with my VirtualBox virtual network interfaces when I'm running 3.14-rt4. It's supposed to be fixed in vanilla 3.13 or something.
spiffytech commented on Jul 25, 2014
@egasimus Same here - I pulled in hundreds of MB of data before killing the container, then got this error.
spiffytech commented on Jul 25, 2014
I upgraded to Debian kernel 3.14 and the problem appears to have gone away. It looks like the problem existed in some kernels before 3.5, was fixed in 3.5, regressed in 3.6, and was patched somewhere between 3.12 and 3.14. https://bugzilla.redhat.com/show_bug.cgi?id=880394
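For others on Debian who want to try the same workaround, a rough sketch (assuming a wheezy host with the backports repository enabled, which carried a 3.14 kernel at the time):
# Check which kernel is currently running
uname -r
# Install the newer backports kernel and reboot into it
sudo apt-get -t wheezy-backports install linux-image-amd64
sudo reboot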
egasimus commented on Jul 27, 2014
@spiffytech Do you have any idea where I can report this for the realtime kernel flavour? I think they're only releasing an RT patch for every other version, and I would really hate to see 3.16-rt come out with this still broken. :/
EDIT: Filed it at kernel.org.
ibuildthecloud commented on Dec 22, 2014
I'm getting this on Ubuntu 14.10 running a 3.18.1 kernel. The kernel log shows
I'll send docker version/info once the system isn't frozen anymore :)
sbward commented on Dec 23, 2014
We're seeing this issue as well. Ubuntu 14.04, 3.13.0-37-generic
jbalonso commented on Dec 29, 2014
On Ubuntu 14.04 server, my team has found that downgrading from 3.13.0-40-generic to 3.13.0-32-generic "resolves" the issue. Given @sbward's observation, that would put the regression after 3.13.0-32-generic and before (or including) 3.13.0-37-generic.
I'll add that, in our case, we sometimes see a negative usage count.
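If anyone else wants to try the same downgrade, a rough sketch (assuming a 14.04 host where the older kernel packages are still in the archive):
# Install the older kernel alongside the current one, then reboot and select it
# from the GRUB "Advanced options" menu (or pin it via GRUB_DEFAULT)
sudo apt-get install linux-image-3.13.0-32-generic linux-image-extra-3.13.0-32-generic
sudo reboot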
rsampaio commented on Jan 15, 2015
FWIW, we hit this bug running LXC on the trusty kernel (3.13.0-40-generic #69-Ubuntu); the message appears in dmesg followed by this stack trace:
MrMMorris commented on Mar 16, 2015
Ran into this on Ubuntu 14.04 and Debian jessie w/ kernel 3.16.x.
Docker command:
docker run -t -i -v /data/sitespeed.io:/sitespeed.io/results company/dockerfiles:sitespeed.io-latest --name "Superbrowse"
This seems like a pretty bad issue...