commit ce05b1d31e57b7de6b814073e88bdd403ce71229
Author: Greg Kroah-Hartman
Date:   Thu Jul 19 12:12:20 2012 -0700

    Linux 3.0.38

commit 0851978b661f25192ff763289698f3175b1bab42
Author: Thomas Gleixner
Date:   Tue Jul 17 13:33:58 2012 -0400

    timekeeping: Add missing update call in timekeeping_resume()

    This is a backport of 3e997130bd2e8c6f5aaa49d6e3161d4d29b43ab0

    The leap second rework unearthed another issue of inconsistent data.

    On timekeeping_resume() the timekeeper data is updated, but nothing
    calls timekeeping_update(), so now the update code in the timer
    interrupt sees stale values.

    This has been the case before those changes, but then the timer
    interrupt was using stale data as well so this went unnoticed for
    quite some time.

    Add the missing update call, so all the data is consistent everywhere.

    Reported-by: Andreas Schwab
    Reported-and-tested-by: "Rafael J. Wysocki"
    Reported-and-tested-by: Martin Steigerwald
    Cc: John Stultz
    Cc: Ingo Molnar
    Cc: Peter Zijlstra
    Cc: Prarit Bhargava
    Signed-off-by: Thomas Gleixner
    Signed-off-by: John Stultz
    Signed-off-by: Linus Torvalds
    Cc: Prarit Bhargava
    Cc: Thomas Gleixner
    Signed-off-by: John Stultz
    Signed-off-by: Greg Kroah-Hartman

commit bb6ed34f2a6eeb40608b8ca91f3ec90ec9dca26f
Author: John Stultz
Date:   Tue Jul 17 13:33:57 2012 -0400

    hrtimer: Update hrtimer base offsets each hrtimer_interrupt

    This is a backport of 5baefd6d84163443215f4a99f6a20f054ef11236

    The update of the hrtimer base offsets on all cpus cannot be made
    atomically from the timekeeper.lock held and interrupt disabled region
    as smp function calls are not allowed there.

    clock_was_set(), which enforces the update on all cpus, is called
    either from preemptible process context in case of do_settimeofday()
    or from the softirq context when the offset modification happened in
    the timer interrupt itself due to a leap second.

    In both cases there is a race window for an hrtimer interrupt between
    dropping timekeeper lock, enabling interrupts and clock_was_set()
    issuing the updates. Any interrupt which arrives in that window will
    see the new time but operate on stale offsets.

    So we need to make sure that an hrtimer interrupt always sees a
    consistent state of time and offsets.

    ktime_get_update_offsets() allows us to get the current monotonic time
    and update the per cpu hrtimer base offsets from hrtimer_interrupt()
    to capture a consistent state of monotonic time and the offsets. The
    function replaces the existing ktime_get() calls in hrtimer_interrupt().

    The overhead of the new function vs. ktime_get() is minimal as it just
    adds two store operations.

    This ensures that any changes to realtime or boottime offsets are
    noticed and stored into the per-cpu hrtimer base structures, prior to
    any hrtimer expiration and guarantees that timers are not expired early.
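
[Illustration, not part of the commit above and not kernel code: a minimal
user-space sketch of the technique ktime_get_update_offsets() relies on,
taking the monotonic time and the realtime/boottime offsets in one consistent
snapshot so a reader can never combine new time with stale offsets. The
structure and function names below are invented for this example.]

    #include <stdatomic.h>
    #include <stdint.h>
    #include <stdio.h>

    struct timebase {
        atomic_uint seq;     /* even = stable, odd = update in progress */
        int64_t mono_ns;     /* "monotonic" time */
        int64_t off_real_ns; /* offset monotonic -> "realtime" */
        int64_t off_boot_ns; /* offset monotonic -> "boottime" */
    };

    /* Writer: make seq odd, update all fields, make seq even again. */
    static void timebase_update(struct timebase *tb, int64_t mono,
                                int64_t off_real, int64_t off_boot)
    {
        atomic_fetch_add_explicit(&tb->seq, 1, memory_order_release);
        tb->mono_ns = mono;
        tb->off_real_ns = off_real;
        tb->off_boot_ns = off_boot;
        atomic_fetch_add_explicit(&tb->seq, 1, memory_order_release);
    }

    /* Reader: retry until time and both offsets come from one update. */
    static int64_t get_update_offsets(struct timebase *tb,
                                      int64_t *off_real, int64_t *off_boot)
    {
        unsigned int start;
        int64_t mono;

        do {
            start = atomic_load_explicit(&tb->seq, memory_order_acquire);
            mono = tb->mono_ns;
            *off_real = tb->off_real_ns;
            *off_boot = tb->off_boot_ns;
        } while ((start & 1) ||
                 start != atomic_load_explicit(&tb->seq, memory_order_acquire));

        return mono;
    }

    int main(void)
    {
        struct timebase tb = { 0 };
        int64_t off_real, off_boot;

        timebase_update(&tb, 1000, 500, 200);
        int64_t mono = get_update_offsets(&tb, &off_real, &off_boot);
        printf("mono=%lld real=%lld boot=%lld\n", (long long)mono,
               (long long)(mono + off_real), (long long)(mono + off_boot));
        return 0;
    }

[The even/odd sequence counter is the same retry idea the kernel's seqlock
uses; the point of the commit is that the hrtimer interrupt obtains the time
and the offsets from a single such snapshot instead of reading them
separately.]
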
    Signed-off-by: John Stultz
    Reviewed-by: Ingo Molnar
    Acked-by: Peter Zijlstra
    Acked-by: Prarit Bhargava
    Link: http://lkml.kernel.org/r/1341960205-56738-8-git-send-email-johnstul@us.ibm.com
    Signed-off-by: Thomas Gleixner
    Cc: Prarit Bhargava
    Cc: Thomas Gleixner
    Signed-off-by: John Stultz
    Signed-off-by: Greg Kroah-Hartman

commit 22f4bbcfb131e2392c78ad67af35fdd436d4dd54
Author: Thomas Gleixner
Date:   Tue Jul 17 13:33:56 2012 -0400

    timekeeping: Provide hrtimer update function

    This is a backport of f6c06abfb3972ad4914cef57d8348fcb2932bc3b

    To finally fix the infamous leap second issue and other race windows
    caused by functions which change the offsets between the various time
    bases (CLOCK_MONOTONIC, CLOCK_REALTIME and CLOCK_BOOTTIME) we need a
    function which atomically gets the current monotonic time and updates
    the offsets of CLOCK_REALTIME and CLOCK_BOOTTIME with minimalistic
    overhead.

    The previous patch which provides ktime_t offsets allows us to make
    this function almost as cheap as ktime_get() which is going to be
    replaced in hrtimer_interrupt().

    Signed-off-by: Thomas Gleixner
    Reviewed-by: Ingo Molnar
    Acked-by: Peter Zijlstra
    Acked-by: Prarit Bhargava
    Signed-off-by: John Stultz
    Link: http://lkml.kernel.org/r/1341960205-56738-7-git-send-email-johnstul@us.ibm.com
    Signed-off-by: Thomas Gleixner
    Cc: Prarit Bhargava
    Cc: Thomas Gleixner
    Signed-off-by: John Stultz
    Signed-off-by: Greg Kroah-Hartman

commit 6c89f2ce05ea7e26a7580ad9eb950f2c4f10891b
Author: Thomas Gleixner
Date:   Tue Jul 17 13:33:55 2012 -0400

    hrtimers: Move lock held region in hrtimer_interrupt()

    This is a backport of 196951e91262fccda81147d2bcf7fdab08668b40

    We need to update the base offsets from this code and we need to do
    that under base->lock. Move the lock held region around the
    ktime_get() calls. The ktime_get() calls are going to be replaced with
    a function which gets the time and the offsets atomically.

    Signed-off-by: Thomas Gleixner
    Reviewed-by: Ingo Molnar
    Acked-by: Peter Zijlstra
    Acked-by: Prarit Bhargava
    Signed-off-by: John Stultz
    Link: http://lkml.kernel.org/r/1341960205-56738-6-git-send-email-johnstul@us.ibm.com
    Signed-off-by: Thomas Gleixner
    Cc: Prarit Bhargava
    Cc: Thomas Gleixner
    Signed-off-by: John Stultz
    Signed-off-by: Greg Kroah-Hartman

commit 03a90b9a6f7eec70edde4eb1f88fa8a5c058d85e
Author: Thomas Gleixner
Date:   Tue Jul 17 13:33:54 2012 -0400

    timekeeping: Maintain ktime_t based offsets for hrtimers

    This is a backport of 5b9fe759a678e05be4937ddf03d50e950207c1c0

    We need to update the hrtimer clock offsets from the hrtimer interrupt
    context. To avoid conversions from timespec to ktime_t maintain a
    ktime_t based representation of those offsets in the timekeeper. This
    puts the conversion overhead into the code which updates the
    underlying offsets and provides fast accessible values in the hrtimer
    interrupt.

    Signed-off-by: Thomas Gleixner
    Signed-off-by: John Stultz
    Reviewed-by: Ingo Molnar
    Acked-by: Peter Zijlstra
    Acked-by: Prarit Bhargava
    Link: http://lkml.kernel.org/r/1341960205-56738-4-git-send-email-johnstul@us.ibm.com
    Signed-off-by: Thomas Gleixner
    Cc: Prarit Bhargava
    Cc: Thomas Gleixner
    Signed-off-by: John Stultz
    Signed-off-by: Greg Kroah-Hartman

commit d21e4baf4523fec26e3c70cb78b013ad3b245c83
Author: John Stultz
Date:   Tue Jul 17 13:33:53 2012 -0400

    timekeeping: Fix leapsecond triggered load spike issue

    This is a backport of 4873fa070ae84a4115f0b3c9dfabc224f1bc7c51

    The timekeeping code misses an update of the hrtimer subsystem after a
    leap second happened.
    Due to that, timers based on CLOCK_REALTIME are either expiring a
    second early or late, depending on whether a leap second has been
    inserted or deleted, until an operation is initiated which causes
    that update. Unless the update happens by some other means, this
    discrepancy between the timekeeping and the hrtimer data stays
    forever and timers are expired either early or late.

    The reported immediate workaround - $ date -s "`date`" - is causing a
    call to clock_was_set() which updates the hrtimer data structures.
    See: http://www.sheeri.com/content/mysql-and-leap-second-high-cpu-and-fix

    Add the missing clock_was_set() call to update_wall_time() in case of
    a leap second event. The actual update is deferred to softirq context
    as the necessary smp function call cannot be invoked from hard
    interrupt context.

    Signed-off-by: John Stultz
    Reported-by: Jan Engelhardt
    Reviewed-by: Ingo Molnar
    Acked-by: Peter Zijlstra
    Acked-by: Prarit Bhargava
    Link: http://lkml.kernel.org/r/1341960205-56738-3-git-send-email-johnstul@us.ibm.com
    Signed-off-by: Thomas Gleixner
    Cc: Prarit Bhargava
    Cc: Thomas Gleixner
    Signed-off-by: John Stultz
    Signed-off-by: Greg Kroah-Hartman

commit 62b787f886e2d96cc7c5428aeee05dbe32a9531b
Author: John Stultz
Date:   Tue Jul 17 13:33:52 2012 -0400

    hrtimer: Provide clock_was_set_delayed()

    This is a backport of f55a6faa384304c89cfef162768e88374d3312cb

    clock_was_set() cannot be called from hard interrupt context because
    it calls on_each_cpu(). For fixing the widely reported leap seconds
    issue it is necessary to call it from hard interrupt context, i.e. the
    timer tick code, which does the timekeeping updates.

    Provide a new function which denotes it in the hrtimer cpu base
    structure of the cpu on which it is called and raise the hrtimer
    softirq. We then execute the clock_was_set() notification from softirq
    context in run_hrtimer_softirq(). The hrtimer softirq is rarely used,
    so polling the flag there is not a performance issue.

    [ tglx: Made it depend on CONFIG_HIGH_RES_TIMERS. We really should get
      rid of all this ifdeffery ASAP ]

    Signed-off-by: John Stultz
    Reported-by: Jan Engelhardt
    Reviewed-by: Ingo Molnar
    Acked-by: Peter Zijlstra
    Acked-by: Prarit Bhargava
    Link: http://lkml.kernel.org/r/1341960205-56738-2-git-send-email-johnstul@us.ibm.com
    Signed-off-by: Thomas Gleixner
    Cc: Prarit Bhargava
    Cc: Thomas Gleixner
    Signed-off-by: John Stultz
    Signed-off-by: Greg Kroah-Hartman

commit c7e2580578671c4d19a1a83e6fdb2482cc136283
Author: Thomas Gleixner
Date:   Tue Jul 17 13:33:51 2012 -0400

    time: Move common updates to a function

    This is a backport of cc06268c6a87db156af2daed6e96a936b955cc82

    While not a bugfix itself, it allows the following fixes to be
    backported in a more straightforward manner.

    CC: Thomas Gleixner
    CC: Eric Dumazet
    CC: Richard Cochran
    Signed-off-by: Thomas Gleixner
    Cc: Prarit Bhargava
    Cc: Thomas Gleixner
    Signed-off-by: John Stultz
    Signed-off-by: Greg Kroah-Hartman

commit c33f2424c3941986d402c81d380d4e805870a20f
Author: John Stultz
Date:   Tue Jul 17 13:33:50 2012 -0400

    timekeeping: Fix CLOCK_MONOTONIC inconsistency during leapsecond

    This is a backport of fad0c66c4bb836d57a5f125ecd38bed653ca863a which
    resolves a bug in the previous commit.

    Commit 6b43ae8a61 (ntp: Fix leap-second hrtimer livelock) broke the
    leapsecond update of CLOCK_MONOTONIC. The missing leapsecond update to
    wall_to_monotonic causes discontinuities in CLOCK_MONOTONIC.

    Adjust wall_to_monotonic when NTP inserted a leapsecond.
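
[Illustration, not from the commit above: a toy user-space calculation showing
why wall_to_monotonic has to be adjusted together with xtime when a leap
second is inserted. CLOCK_MONOTONIC is derived as xtime + wall_to_monotonic,
so stepping xtime back one second without the matching adjustment makes
CLOCK_MONOTONIC jump backwards. The numeric values below are made up.]

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        int64_t xtime = 1341100799;            /* CLOCK_REALTIME, seconds */
        int64_t wall_to_monotonic = -1341000000;
        int64_t mono_before = xtime + wall_to_monotonic;

        xtime -= 1;                            /* leap second inserted */

        int64_t mono_broken = xtime + wall_to_monotonic;  /* jumps back */
        wall_to_monotonic += 1;                /* the fix */
        int64_t mono_fixed = xtime + wall_to_monotonic;   /* continuous */

        printf("before=%lld broken=%lld fixed=%lld\n",
               (long long)mono_before, (long long)mono_broken,
               (long long)mono_fixed);
        return 0;
    }
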
    Reported-by: Richard Cochran
    Signed-off-by: John Stultz
    Tested-by: Richard Cochran
    Link: http://lkml.kernel.org/r/1338400497-12420-1-git-send-email-john.stultz@linaro.org
    Signed-off-by: Thomas Gleixner
    Cc: Prarit Bhargava
    Cc: Thomas Gleixner
    Signed-off-by: John Stultz
    Signed-off-by: Greg Kroah-Hartman

commit 96bab736bad82423c2b312d602689a9078481fa9
Author: Richard Cochran
Date:   Tue Jul 17 13:33:49 2012 -0400

    ntp: Correct TAI offset during leap second

    This is a backport of dd48d708ff3e917f6d6b6c2b696c3f18c019feed

    When repeating a UTC time value during a leap second (when the UTC
    time should be 23:59:60), the TAI timescale should not stop. The
    kernel NTP code increments the TAI offset one second too late. This
    patch fixes the issue by incrementing the offset during the leap
    second itself.

    Signed-off-by: Richard Cochran
    Cc: Prarit Bhargava
    Cc: Thomas Gleixner
    Signed-off-by: John Stultz
    Signed-off-by: Greg Kroah-Hartman

commit 9c24771f844b6f0708a72cd116953e0a128e5d2a
Author: John Stultz
Date:   Tue Jul 17 13:33:48 2012 -0400

    ntp: Fix leap-second hrtimer livelock

    This is a backport of 6b43ae8a619d17c4935c3320d2ef9e92bdeed05d

    This should have been backported when it was committed, but I mistook
    the problem as requiring the ntp_lock changes that landed in 3.4 in
    order for it to occur.

    Unfortunately the same issue can happen (with only one cpu) as
    follows:

      do_adjtimex()
        write_seqlock_irq(&xtime_lock);
          process_adjtimex_modes()
            process_adj_status()
              ntp_start_leap_timer()
                hrtimer_start()
                  hrtimer_reprogram()
                    tick_program_event()
                      clockevents_program_event()
                        ktime_get()
                          seq = read_seqbegin(xtime_lock);  [DEADLOCK]

    This deadlock will not always occur, as it requires the leap_timer to
    force a hrtimer_reprogram, which only happens if it's set and there's
    no sooner timer to expire.

    NOTE: This patch, being faithful to the original commit, introduces a
    bug (we don't update wall_to_monotonic), which will be resolved by
    backporting a following fix.

    Original commit message below:

    Since commit 7dffa3c673fbcf835cd7be80bb4aec8ad3f51168 the ntp
    subsystem has used an hrtimer for triggering the leapsecond
    adjustment. However, this can cause a potential livelock.

    Thomas diagnosed this as the following pattern:

      CPU 0                                CPU 1
      do_adjtimex()
        spin_lock_irq(&ntp_lock);
          process_adjtimex_modes();        timer_interrupt()
            process_adj_status();            do_timer()
              ntp_start_leap_timer();          write_lock(&xtime_lock);
                hrtimer_start();                 update_wall_time();
                  hrtimer_reprogram();             ntp_tick_length()
                    tick_program_event()             spin_lock(&ntp_lock);
                      clockevents_program_event()
                        ktime_get()
                          seq = read_seqbegin(xtime_lock);

    This patch tries to avoid the problem by reverting back to not using
    an hrtimer to inject leapseconds, and instead we handle the leapsecond
    processing in the second_overflow() function.

    The downside to this change is that on systems that support highres
    timers, the leap second processing will occur on a HZ tick boundary
    (ie: ~1-10ms, depending on HZ) after the leap second instead of
    possibly sooner (~34us in my tests w/ x86_64 lapic).

    This patch applies on top of tip/timers/core.

    CC: Sasha Levin
    CC: Thomas Gleixner
    Reported-by: Sasha Levin
    Diagnosed-by: Thomas Gleixner
    Tested-by: Sasha Levin
    Cc: Prarit Bhargava
    Cc: Thomas Gleixner
    Signed-off-by: John Stultz
    Signed-off-by: Greg Kroah-Hartman

commit 31b83ef7cfda5a7b74446ca70c1e231b24450cbd
Author: Michal Kazior
Date:   Fri Jun 8 10:55:44 2012 +0200

    cfg80211: check iface combinations only when iface is running

    commit f8cdddb8d61d16a156229f0910f7ecfc7a82c003 upstream.
    Don't validate interface combinations on a stopped interface.
    Otherwise we might end up being able to create a new interface with a
    certain type, but won't be able to change an existing interface into
    that type. This also skips some other functions when the interface is
    stopped and the interface type is being changed.

    Signed-off-by: Michal Kazior
    Signed-off-by: Johannes Berg
    [Fixes regression introduced by cherry pick of 463454b5dbd8]
    Signed-off-by: Paul Gortmaker

commit 1a4eda9788a542347dfc2b1684636bcbe2ee3f79
Author: Eric Dumazet
Date:   Fri Dec 2 23:41:42 2011 +0000

    tcp: drop SYN+FIN messages

    commit fdf5af0daf8019cec2396cdef8fb042d80fe71fa upstream.

    Denys Fedoryshchenko reported that SYN+FIN attacks were bringing his
    linux machines to their limits.

    Don't call conn_request() if the TCP flags include the SYN flag.

    Reported-by: Denys Fedoryshchenko
    Signed-off-by: Eric Dumazet
    Signed-off-by: David S. Miller
    Cc: Ben Hutchings
    Signed-off-by: Greg Kroah-Hartman

commit 9c5d9de68ea54ae743b1cb36645c96b8fdcc1c8d
Author: Yuri Khan
Date:   Wed Jul 11 22:12:31 2012 -0700

    Input: xpad - add Andamiro Pump It Up pad

    commit e76b8ee25e034ab601b525abb95cea14aa167ed3 upstream.

    I couldn't find the vendor ID in any of the online databases, but this
    mat has a Pump It Up logo on the top side of the controller
    compartment, and a disclaimer stating that Andamiro will not be liable
    on the bottom.

    Signed-off-by: Yuri Khan
    Signed-off-by: Dmitry Torokhov
    Signed-off-by: Greg Kroah-Hartman

commit c90dab38fe75c7a131a3db8d35aedaf155aede04
Author: Tushar Dave
Date:   Thu Jul 12 08:56:56 2012 +0000

    e1000e: Correct link check logic for 82571 serdes

    commit d0efa8f23a644f7cb7d1f8e78dd9a223efa412a3 upstream.

    The SYNCH bit and IV bit of the RXCW register are sticky. Before
    examining these bits, RXCW should be read twice to filter out one-time
    false events and have correct values for these bits. Incorrect values
    of these bits in the link check logic can cause weird link stability
    issues if auto-negotiation fails.

    Reported-by: Dean Nelson
    Signed-off-by: Tushar Dave
    Reviewed-by: Bruce Allan
    Tested-by: Jeff Pieper
    Signed-off-by: Jeff Kirsher
    Signed-off-by: Greg Kroah-Hartman

commit 16f1a5d495c4c8bd246c96052ab5a0fc4fd03646
Author: Stanislaw Gruszka
Date:   Wed Jul 4 13:10:02 2012 +0200

    rt2x00usb: fix indexes ordering on RX queue kick

    commit efd821182cec8c92babef6e00a95066d3252fda4 upstream.

    On rt2x00_dmastart() we increase the index specified by Q_INDEX and on
    rt2x00_dmadone() we increase the index specified by Q_INDEX_DONE. So
    entries between Q_INDEX_DONE and Q_INDEX are those we currently
    process in the hardware. Entries between Q_INDEX and Q_INDEX_DONE are
    those we can submit to the hardware.

    According to that, fix rt2x00usb_kick_queue(), as we need to submit RX
    entries that are not processed by the hardware. It worked before only
    for an empty queue; otherwise it was broken.

    Note that for TX queues the index ordering is ok. We need to kick
    entries that have a filled skb but were not submitted to the hardware,
    i.e. starting from Q_INDEX_DONE with the ENTRY_DATA_PENDING bit set.

    From a practical standpoint this fixes an RX queue stall, usually
    reproducible in AP mode, like for example reported here:
    https://bugzilla.redhat.com/show_bug.cgi?id=828824

    Reported-and-tested-by: Franco Miceli
    Reported-and-tested-by: Tom Horsley
    Signed-off-by: Stanislaw Gruszka
    Signed-off-by: John W. Linville
    Signed-off-by: Greg Kroah-Hartman

commit 7d50b51a460072da28e45f2c6ebd6f85af34f2c4
Author: Anders Kaseorg
Date:   Sun Jul 15 17:14:25 2012 -0400

    fifo: Do not restart open() if it already found a partner

    commit 05d290d66be6ef77a0b962ebecf01911bd984a78 upstream.

    If a parent and child process open the two ends of a fifo, and the
    child immediately exits, the parent may receive a SIGCHLD before its
    open() returns. In that case, we need to make sure that open() will
    return successfully after the SIGCHLD handler returns, instead of
    throwing EINTR or being restarted. Otherwise, the restarted open()
    would incorrectly wait for a second partner on the other end.

    The following test demonstrates the EINTR that was wrongly thrown from
    the parent’s open(). Change .sa_flags = 0 to .sa_flags = SA_RESTART to
    see a deadlock instead, in which the restarted open() waits for a
    second reader that will never come. (On my systems, this happens
    pretty reliably within about 5 to 500 iterations. Others report that
    it manages to loop ~forever sometimes; YMMV.)

        #include <sys/stat.h>
        #include <sys/types.h>
        #include <sys/wait.h>
        #include <fcntl.h>
        #include <signal.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <unistd.h>

        #define CHECK(x) do if ((x) == -1) {perror(#x); abort();} while(0)

        void handler(int signum) {}

        int main()
        {
            struct sigaction act = {.sa_handler = handler, .sa_flags = 0};
            CHECK(sigaction(SIGCHLD, &act, NULL));
            CHECK(mknod("fifo", S_IFIFO | S_IRWXU, 0));
            for (;;) {
                int fd;
                pid_t pid;
                putc('.', stderr);
                CHECK(pid = fork());
                if (pid == 0) {
                    CHECK(fd = open("fifo", O_RDONLY));
                    _exit(0);
                }
                CHECK(fd = open("fifo", O_WRONLY));
                CHECK(close(fd));
                CHECK(waitpid(pid, NULL, 0));
            }
        }

    This is what I suspect was causing the Git test suite to fail in
    t9010-svn-fe.sh: http://bugs.debian.org/678852

    Signed-off-by: Anders Kaseorg
    Reviewed-by: Jonathan Nieder
    Signed-off-by: Linus Torvalds
    Signed-off-by: Greg Kroah-Hartman

commit 19b0759e73cdf97db6f36ac04bbd37d3feb0657b
Author: Takashi Iwai
Date:   Mon Jun 25 15:07:17 2012 +0200

    intel_ips: blacklist HP ProBook laptops

    commit 88ca518b0bb4161e5f20f8a1d9cc477cae294e54 upstream.

    The intel_ips driver spews the warning message "ME failed to update
    for more than 1s, likely hung" every second, endlessly, on HP ProBook
    laptops with IronLake. As this has never worked, better to blacklist
    the driver for now.

    Signed-off-by: Takashi Iwai
    Signed-off-by: Matthew Garrett
    Signed-off-by: Greg Kroah-Hartman

commit 64428b015e0bba23744e2a64a011be9738d05356
Author: Todd Poynor
Date:   Fri Jul 13 15:30:48 2012 +0900

    ARM: SAMSUNG: fix race in s3c_adc_start for ADC

    commit 8265981bb439f3ecc5356fb877a6c2a6636ac88a upstream.

    Checking whether adc->ts_pend is already claimed should be done with
    the lock held.

    Signed-off-by: Todd Poynor
    Acked-by: Ben Dooks
    Signed-off-by: Kukjin Kim
    Signed-off-by: Greg Kroah-Hartman

commit 29224d0b695c89e4f4cfc5f1cb3ef3d697c60dc8
Author: Herton Ronaldo Krzesinski
Date:   Wed May 16 16:21:52 2012 -0300

    mtd: nandsim: don't open code a do_div helper

    commit 596fd46268634082314b3af1ded4612e1b7f3f03 upstream.

    We don't need to open code the divide function, just use div_u64,
    which already exists and does the same job.

    While this is a straightforward clean up, there is more to it than
    that: the real motivation is the following build problem. While
    building in a cross compiling environment for armel, using gcc 4.6.3
    (Ubuntu/Linaro 4.6.3-1ubuntu5), I was getting the following build
    error:

        ERROR: "__aeabi_uldivmod" [drivers/mtd/nand/nandsim.ko] undefined!
    After investigating with objdump and a hand-built assembly version
    generated with the compiler, I narrowed __aeabi_uldivmod down to
    being generated from the divide function. When nandsim.c is built
    with -fno-inline-functions-called-once, which happens when
    CONFIG_DEBUG_SECTION_MISMATCH is enabled, the do_div optimization in
    arch/arm/include/asm/div64.h doesn't work as expected with the open
    coded divide function: even if the do_div we are using doesn't have a
    constant divisor, the compiler still includes the else parts of the
    optimized do_div macro and translates the divisions there to use
    __aeabi_uldivmod, instead of only calling __do_div_asm -> __do_div64
    and optimizing/removing everything else out.

    So to reproduce, gcc 4.6 plus CONFIG_DEBUG_SECTION_MISMATCH=y and
    CONFIG_MTD_NAND_NANDSIM=m should do it, building on armel.

    After this change, the compiler does the intended thing even with
    -fno-inline-functions-called-once, and optimizes out the constant
    handling in the optimized do_div on arm as expected.

    As this also avoids a build issue, I'm marking it for Stable, as I
    think that is applicable in this case.

    Signed-off-by: Herton Ronaldo Krzesinski
    Acked-by: Nicolas Pitre
    Signed-off-by: Artem Bityutskiy
    Signed-off-by: David Woodhouse
    Signed-off-by: Greg Kroah-Hartman

commit e49068478916ea599765112c3cf3e31c520bce5c
Author: Santosh Nayak
Date:   Sat Jun 23 07:59:54 2012 -0300

    media: dvb-core: Release semaphore on error path dvb_register_device()

    commit 82163edcdfa4eb3d74516cc8e9f38dd3d039b67d upstream.

    There is a missing "up_write()" here. The semaphore should be released
    before returning the error value.

    Signed-off-by: Santosh Nayak
    Signed-off-by: Mauro Carvalho Chehab
    Signed-off-by: Greg Kroah-Hartman

commit 4ff1ddad40c57605cc33a78699e4559217a06a46
Author: Jeff Moyer
Date:   Thu Jul 12 09:43:14 2012 -0400

    block: fix infinite loop in __getblk_slow

    commit 91f68c89d8f35fe98ea04159b9a3b42d0149478f upstream.

    Commit 080399aaaf35 ("block: don't mark buffers beyond end of disk as
    mapped") exposed a bug in __getblk_slow that causes mount to hang as
    it loops infinitely waiting for a buffer that lies beyond the end of
    the disk to become uptodate.

    The problem was initially reported by Torsten Hilbrich here:
    https://lkml.org/lkml/2012/6/18/54
    and also reported independently here:
    http://www.sysresccd.org/forums/viewtopic.php?f=13&t=4511
    and then Richard W.M. Jones and Marcos Mello noted a few separate
    bugzillas also associated with the same issue. This patch has been
    confirmed to fix:
    https://bugzilla.redhat.com/show_bug.cgi?id=835019

    The main problem is here, in __getblk_slow:

        for (;;) {
                struct buffer_head * bh;
                int ret;

                bh = __find_get_block(bdev, block, size);
                if (bh)
                        return bh;

                ret = grow_buffers(bdev, block, size);
                if (ret < 0)
                        return NULL;
                if (ret == 0)
                        free_more_memory();
        }

    __find_get_block does not find the block, since it will not be marked
    as mapped, and so grow_buffers is called to fill in the buffers for
    the associated page. I believe the for (;;) loop is there primarily to
    retry in the case of memory pressure keeping grow_buffers from
    succeeding. However, we also continue to loop for other cases, like
    the block lying beyond the end of the disk. So, the fix I came up with
    is to only loop when grow_buffers fails due to memory allocation
    issues (return value of 0).

    The attached patch was tested by myself, Torsten, and Rich, and was
    found to resolve the problem in all cases.

    Signed-off-by: Jeff Moyer
    Reported-and-Tested-by: Torsten Hilbrich
    Tested-by: Richard W.M. Jones
    Reviewed-by: Josh Boyer
    [ Jens is on vacation, taking this directly - Linus ]
    Signed-off-by: Linus Torvalds
    Signed-off-by: Greg Kroah-Hartman

commit d53c2bc79a9044090cafc82530be55d4bb144905
Author: Jean Delvare
Date:   Thu Jul 12 22:47:37 2012 +0200

    hwmon: (it87) Preserve configuration register bits on init

    commit 41002f8dd5938d5ad1d008ce5bfdbfe47fa7b4e8 upstream.

    We were accidentally losing one bit in the configuration register on
    device initialization. It was reported to freeze one specific system
    right away. Properly preserve all bits we don't explicitly want to
    change in order to prevent that.

    Reported-by: Stevie Trujillo
    Signed-off-by: Jean Delvare
    Reviewed-by: Guenter Roeck
    Signed-off-by: Greg Kroah-Hartman
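
[Illustration, not from the it87 commit above: a minimal user-space sketch of
the read-modify-write pattern that preserves configuration register bits. The
register access helpers and bit names below are placeholders invented for the
example, not the driver's actual interface.]

    #include <stdint.h>
    #include <stdio.h>

    static uint8_t fake_reg = 0xbc;         /* stand-in for the hardware register */

    static uint8_t reg_read(void)       { return fake_reg; }
    static void    reg_write(uint8_t v) { fake_reg = v; }

    #define START_BIT   0x01                /* bit we want to set */
    #define UPDATE_BIT  0x08                /* bit we want to clear */

    int main(void)
    {
        /* Read the current value so unrelated bits are preserved, change
         * only the intended bits, then write the result back. Writing a
         * fixed constant instead would clobber the remaining bits. */
        uint8_t cfg = reg_read();
        cfg |= START_BIT;
        cfg &= ~UPDATE_BIT;
        reg_write(cfg);

        printf("config: 0x%02x\n", reg_read());  /* 0xb5: other bits intact */
        return 0;
    }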