Commit Graph

49262 Commits

Author SHA1 Message Date
Robert Doebbelin
7cabc61e01 fuse: do not use iocb after it may have been freed
There's a race in fuse_direct_IO(), whereby is_sync_kiocb() is called on an
iocb that could have been freed if async io has already completed.  The fix
in this case is simple and obvious: cache the result before starting io.

It was discovered by KASan:

kernel: ==================================================================
kernel: BUG: KASan: use after free in fuse_direct_IO+0xb1a/0xcc0 at addr ffff88036c414390
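
As a rough user-space illustration of the pattern (not the fuse code
itself; the request struct and worker thread are made up), the idea is to
read everything needed from the object before handing it to the async
path that may free it:

  #include <pthread.h>
  #include <stdbool.h>
  #include <stdio.h>
  #include <stdlib.h>

  struct request {
      bool is_sync;                 /* stand-in for is_sync_kiocb(iocb) */
  };

  static void *complete_async(void *arg)
  {
      free(arg);                    /* async completion frees the request */
      return NULL;
  }

  int main(void)
  {
      struct request *req = malloc(sizeof(*req));
      req->is_sync = false;

      /* cache the answer while the request is guaranteed to be alive */
      bool is_sync = req->is_sync;

      pthread_t worker;
      pthread_create(&worker, NULL, complete_async, req);

      /* touching req here could be a use-after-free; use the cached copy */
      printf("submitted %s request\n", is_sync ? "sync" : "async");

      pthread_join(worker, NULL);
      return 0;
  }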

Signed-off-by: Robert Doebbelin <robert@quobyte.com>
Signed-off-by: Miklos Szeredi <mszeredi@redhat.com>
Fixes: bcba24ccdc ("fuse: enable asynchronous processing direct IO")
Cc: <stable@vger.kernel.org> # 3.10+
2016-03-14 15:02:50 +01:00
Ashish Samant
2e3fcb1ccd btrfs: Print Warning only if ENOSPC_DEBUG is enabled
Don't print a warning for an ENOSPC error unless ENOSPC_DEBUG is enabled.
Use btrfs_debug if it is enabled.

Signed-off-by: Ashish Samant <ashish.samant@oracle.com>
[ preserve the WARN_ON ]
Signed-off-by: David Sterba <dsterba@suse.com>
2016-03-14 14:59:54 +01:00
Al Viro
ed782b5a70 dcache.c: new helper: __d_add()
d_add() with inode->i_lock already held; common to d_add() and
d_splice_alias().  All ->lookup() instances that end up hashing
the dentry they are given will hash it here.

This almost completes the preparations to parallel lookups
proper - the only remaining bit is taking security_d_instantiate()
past d_rehash() and doing rehashing without dropping ->d_lock.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2016-03-14 00:17:38 -04:00
Al Viro
de689f5e36 don't bother with __d_instantiate(dentry, NULL)
it's a no-op - bumping ->d_seq is pointless there.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2016-03-14 00:17:32 -04:00
Al Viro
27f203f655 untangle fsnotify_d_instantiate() a bit
First of all, don't bother calling it if inode is NULL -
that makes inode argument unused.  Moreover, do it *before*
dropping ->d_lock, not right after that (and don't bother
grabbing ->d_lock in it, of course).

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2016-03-14 00:17:28 -04:00
Al Viro
34d0d19dc0 uninline d_add()
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2016-03-14 00:17:24 -04:00
Al Viro
668d0cd56e replace d_add_unique() with saner primitive
new primitive: d_exact_alias(dentry, inode).  If there is an unhashed
dentry with the same name/parent and given inode, rehash, grab and
return it.  Otherwise, return NULL.  The only caller of d_add_unique()
switched to d_exact_alias() + d_splice_alias().

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2016-03-14 00:17:20 -04:00
Al Viro
e12a4e8a04 quota: use lookup_one_len_unlocked()
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2016-03-14 00:16:48 -04:00
Al Viro
85f40482bc cifs_get_root(): use lookup_one_len_unlocked()
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2016-03-14 00:16:44 -04:00
Al Viro
130f9ab75d nfs_lookup: don't bother with d_instantiate(dentry, NULL)
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2016-03-14 00:16:40 -04:00
Al Viro
9d95afd597 kill dentry_unhash()
the last user is gone

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2016-03-14 00:16:33 -04:00
Al Viro
f8b31710e4 ceph_fill_trace(): don't bother with d_instantiate(dn, NULL)
... and use d_add(dn, NULL) in case we need to hash a negative
unhashed rather than using d_rehash() directly.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2016-03-14 00:16:06 -04:00
Al Viro
de4acda16e autofs4: don't bother with d_instantiate(dentry, NULL) in ->lookup()
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2016-03-14 00:16:00 -04:00
Al Viro
5cf3b560af configfs: move d_rehash() into configfs_create() for regular files
... and turn it into d_add in there

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2016-03-14 00:15:55 -04:00
Al Viro
f7380af04b ceph: don't bother with d_rehash() in splice_dentry()
d_splice_alias() guarantees that it'll always be hashed

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2016-03-14 00:15:51 -04:00
Al Viro
949a852e46 namei: teach lookup_slow() to skip revalidate
... and make mountpoint_last() use it.  That makes all
candidates for lookup with parent locked shared go
through lookup_slow().

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2016-03-14 00:15:46 -04:00
Al Viro
e3c1392808 namei: massage lookup_slow() to be usable by lookup_one_len_unlocked()
Return dentry and don't pass nameidata or path; lift crossing mountpoints
into the caller.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2016-03-14 00:15:40 -04:00
Al Viro
d6d95ded91 lookup_one_len_unlocked(): use lookup_dcache()
No need to lock parent just because of ->d_revalidate() on child;
contrary to the stale comment, lookup_dcache() *can* be used without
locking the parent.  Result can be moved as soon as we return, of
course, but the same is true for lookup_one_len_unlocked() itself.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2016-03-14 00:15:36 -04:00
Al Viro
74ff0ffc7f namei: simplify invalidation logics in lookup_dcache()
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2016-03-14 00:15:31 -04:00
Al Viro
e9742b5332 namei: change calling conventions for lookup_{fast,slow} and follow_managed()
Have lookup_fast() return 1 on success and 0 on "need to fall back";
lookup_slow() and follow_managed() return positive (1) on success.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2016-03-14 00:14:35 -04:00
Al Viro
5d0f49c136 namei: untangle lookup_fast()
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2016-03-14 00:14:25 -04:00
vikram.jadhav07
0304688676 ext4: clean up error handling in the MMP support
There is a memory leak: neither the caller kmmpd() nor the callee
read_mmp_block() releases bh_check (a buffer_head). This patch fixes
the problem.

[ Additional changes suggested by Andreas Dilger -- TYT ]

Signed-off-by: Jadhav Vikram <vikramjadhavpucsd2007@gmail.com>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
2016-03-13 17:56:52 -04:00
Michal Hocko
490c1b444c jbd2: do not fail journal because of frozen_buffer allocation failure
A journal transaction might fail prematurely because the frozen_buffer
is allocated with a GFP_NOFS request:
[   72.440013] do_get_write_access: OOM for frozen_buffer
[   72.440014] EXT4-fs: ext4_reserve_inode_write:4729: aborting transaction: Out of memory in __ext4_journal_get_write_access
[   72.440015] EXT4-fs error (device sda1) in ext4_reserve_inode_write:4735: Out of memory
(...snipped....)
[   72.495559] do_get_write_access: OOM for frozen_buffer
[   72.495560] EXT4-fs: ext4_reserve_inode_write:4729: aborting transaction: Out of memory in __ext4_journal_get_write_access
[   72.496839] do_get_write_access: OOM for frozen_buffer
[   72.496841] EXT4-fs: ext4_reserve_inode_write:4729: aborting transaction: Out of memory in __ext4_journal_get_write_access
[   72.505766] Aborting journal on device sda1-8.
[   72.505851] EXT4-fs (sda1): Remounting filesystem read-only

This wasn't a problem until "mm: page_alloc: do not lock up GFP_NOFS
allocations upon OOM" because small GFP_NOFS allocations never failed.
This allocation seems essential for the journal, and GFP_NOFS is too
restrictive for the memory allocator, so let's use __GFP_NOFAIL here to
emulate the previous behavior.
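
The __GFP_NOFAIL semantics amount to "retry the allocation indefinitely
rather than return NULL"; a rough user-space analog of that behavior
(purely illustrative, not the jbd2 code):

  #include <stdlib.h>
  #include <unistd.h>

  /* rough analog of __GFP_NOFAIL: back off and retry, never return NULL */
  static void *alloc_nofail(size_t size)
  {
      void *p;
      while (!(p = malloc(size)))
          usleep(1000);
      return p;
  }

  int main(void)
  {
      char *frozen_buffer = alloc_nofail(4096);   /* never NULL */
      frozen_buffer[0] = 0;
      free(frozen_buffer);
      return 0;
  }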

Signed-off-by: Michal Hocko <mhocko@suse.com>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
2016-03-13 17:38:20 -04:00
Konstantin Khlebnikov
adb7ef600c ext4: use __GFP_NOFAIL in ext4_free_blocks()
This might be unexpected, but pages allocated for sbi->s_buddy_cache are
charged to the current memory cgroup. So a GFP_NOFS allocation could fail
if the current task has been killed by the OOM killer or if the current
memory cgroup has no free memory left. The block allocator cannot handle
such failures here yet.

Signed-off-by: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
2016-03-13 17:29:06 -04:00
Aihua Zhang
a2821e34df ext4: fix compile error while opening the macro DOUBLE_CHECK
The error is:
    fs/ext4/mballoc.c:475:43: error: 'struct ext4_group_info' has
no member named 'bb_bitmap'.
The definition of the DOUBLE_CHECK macro must come before
'struct ext4_group_info'. Fix that, and move the AGGRESSIVE_CHECK macro
along with it, since the two belong together.

Signed-off-by: Aihua Zhang <zhangaihua1@huawei.com>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
2016-03-13 17:18:12 -04:00
Ales Novak
7915a861c0 ext4: print ext4 mount option data_err=abort correctly
If the data_err=abort option is specified for an ext3/ext4 mount,
/proc/mounts shows it as "(null)". This is caused by token2str()
returning NULL for Opt_data_err_abort (due to its pattern containing
'=').

Signed-off-by: Ales Novak <alnovak@suse.cz>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
2016-03-12 21:55:50 -05:00
Eryu Guan
5e1021f2b6 ext4: fix NULL pointer dereference in ext4_mark_inode_dirty()
ext4_reserve_inode_write() in ext4_mark_inode_dirty() could fail on
error (e.g. EIO), and iloc.bh can be NULL in that case. But the error is
ignored in the following "if" condition, and ext4_expand_extra_isize()
might be called with a NULL iloc.bh, which triggers a NULL pointer
dereference.

This was uncovered by commit 8b4953e13f ("ext4: reserve code points for
the project quota feature"), which enlarges the ext4_inode size, and can
be reproduced by running the following script on a new kernel but with
an old mke2fs:

  #!/bin/bash
  mnt=/mnt/ext4
  devname=ext4-error
  dev=/dev/mapper/$devname
  fsimg=/home/fs.img

  trap cleanup 0 1 2 3 9 15

  cleanup()
  {
          umount $mnt >/dev/null 2>&1
          dmsetup remove $devname
          losetup -d $backend_dev
          rm -f $fsimg
          exit 0
  }

  rm -f $fsimg
  fallocate -l 1g $fsimg
  backend_dev=`losetup -f --show $fsimg`
  devsize=`blockdev --getsz $backend_dev`

  good_tab="0 $devsize linear $backend_dev 0"
  error_tab="0 $devsize error $backend_dev 0"

  dmsetup create $devname --table "$good_tab"

  mkfs -t ext4 $dev
  mount -t ext4 -o errors=continue,strictatime $dev $mnt

  dmsetup load $devname --table "$error_tab" && dmsetup resume $devname
  echo 3 > /proc/sys/vm/drop_caches
  ls -l $mnt
  exit 0

[ Patch changed to simplify the function a tiny bit. -- Ted ]

Signed-off-by: Eryu Guan <guaneryu@gmail.com>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
2016-03-12 21:40:32 -05:00
Linus Torvalds
2a62ec0af2 Merge tag 'xfs-for-linus-4.5-rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/dgc/linux-xfs
Pull xfs fixes from Dave Chinner:
 "This is a fix for a regression introduced in 4.5-rc1 by the new torn
  log write detection code.  The regression only affects people moving a
  clean filesystem between machines/kernels of different architecture
  (such as changing between 32 bit and 64 bit kernels), but this is the
  recommended (and only!) safe way to migrate a filesystem between
  architectures so we really need to ensure it works.

  The changes are larger than I'd prefer right at the end of the release
  cycle, but the majority of the change is just factoring code to enable
  the detection of a clean log at the correct time to avoid this issue.

  Changes:

   - Only perform torn log write detection on dirty logs.  This prevents
     failures being detected due to a clean filesystem being moved
     between machines or kernels of different architectures (e.g.  32 ->
     64 bit, BE -> LE, etc).  This fixes a regression introduced by the
     torn log write detection in 4.5-rc1"

* tag 'xfs-for-linus-4.5-rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/dgc/linux-xfs:
  xfs: only run torn log write detection on dirty logs
  xfs: refactor in-core log state update to helper
  xfs: refactor unmount record detection into helper
  xfs: separate log head record discovery from verification
2016-03-11 10:21:32 -08:00
Linus Torvalds
63cf207e93 Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs
Pull vfs fixes from Al Viro:
 "A couple of fixes: Fix for my dumb braino in ncpfs and a long-standing
  breakage on recovery from failed rename() in jffs2"

* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs:
  jffs2: reduce the breakage on recovery from halfway failed rename()
  ncpfs: fix a braino in OOM handling in ncp_fill_cache()
2016-03-11 10:13:49 -08:00
Dan Carpenter
07c9a8e077 btrfs: scrub: silence an uninitialized variable warning
It's basically harmless if "ref_level" isn't initialized since it's only
used for an error message, but it causes a static checker warning.

Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
2016-03-11 17:21:59 +01:00
Anand Jain
ebb8765b2d btrfs: move btrfs_compression_type to compression.h
So that it's better organized.

Signed-off-by: Anand Jain <anand.jain@oracle.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
2016-03-11 17:12:46 +01:00
Anand Jain
8ae1af3cd1 btrfs: rename btrfs_print_info to btrfs_print_mod_info
So that it indicates what it does.

Signed-off-by: Anand Jain <anand.jain@oracle.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
2016-03-11 17:12:46 +01:00
Satoru Takeuchi
3c1d84b71e Btrfs: Show a warning message if one of objectid reaches its highest value
It's better to show a warning message for the exceptional case that one
of the objectids (in most cases, an inode number) reaches its highest
value. For example, if the inode cache is off and this happens, we can't
create any files even though there may not be that many files yet. This
message makes it easier to detect such a problem.

Signed-off-by: Satoru Takeuchi <takeuchi_satoru@jp.fujitsu.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
2016-03-11 17:12:35 +01:00
Rasmus Villemoes
02def69fae btrfs: use kbasename in btrfsic_mount
This is more readable.

Signed-off-by: Rasmus Villemoes <linux@rasmusvillemoes.dk>
Reviewed-by: Andy Shevchenko <andy.shevchenko@gmail.com>
Signed-off-by: David Sterba <dsterba@suse.com>
2016-03-11 16:55:52 +01:00
Wiebe, Wladislav (Nokia - DE/Ulm)
764fd639d7 pstore: Add support for 64 Bit address space
Some architectures have their reserved RAM in a 64-bit address space.
Therefore convert the mem_address module parameter to ullong.
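
A quick illustration of why the wider type matters (a plain sketch, not
the pstore code): on a 32-bit build, an unsigned long silently truncates
an address above 4 GiB, while unsigned long long keeps it intact:

  #include <stdint.h>
  #include <stdio.h>

  int main(void)
  {
      unsigned long long addr = 0x100000000ULL;  /* 4 GiB, needs > 32 bits */
      uint32_t truncated = (uint32_t)addr;       /* what a 32-bit ulong keeps */

      printf("ullong parameter: %#llx\n", addr);
      printf("32-bit ulong:     %#x\n", truncated);
      return 0;
  }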

Signed-off-by: Wladislav Wiebe <wladislav.wiebe@nokia.com>
Acked-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2016-03-10 09:43:36 -08:00
Geliang Tang
a8ed9b8695 ext4: drop unneeded BUFFER_TRACE in ext4_delete_inline_entry()
BUFFER_TRACE info "call ext4_handle_dirty_metadata" doesn't match the
code, so drop it.

Signed-off-by: Geliang Tang <geliangtang@163.com>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
2016-03-10 00:18:57 -05:00
Adam Buchbinder
b8a07463c8 ext4: fix misspellings in comments.
Signed-off-by: Adam Buchbinder <adam.buchbinder@gmail.com>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
2016-03-09 23:49:05 -05:00
OGAWA Hirofumi
c0a2ad9b50 jbd2: fix FS corruption possibility in jbd2_journal_destroy() on umount path
On the umount path, jbd2_journal_destroy() writes the latest transaction
ID (->j_tail_sequence) to be used at the next mount.

The bug is that ->j_tail_sequence does not hold the latest transaction ID
in some cases. So, at the next mount, there is a chance of conflicting
with remaining (not yet overwritten) transactions.

	mount (id=10)
	write transaction (id=11)
	write transaction (id=12)
	umount (id=10) <= the bug doesn't write latest ID

	mount (id=10)
	write transaction (id=11)
	crash

	mount
	[recovery process]
		transaction (id=11)
		transaction (id=12) <= valid transaction ID, but old commit
                                       must not replay

As shown above, this bug can cause recovery failure or FS
corruption.

So why doesn't ->j_tail_sequence point to the latest ID?

Because if checkpoint transactions were reclaimed due to memory pressure
(i.e. bdev_try_to_free_page()), ->j_tail_sequence is not updated.
(Another case is __jbd2_journal_clean_checkpoint_list() being called
with an empty transaction.)

So in the above cases, ->j_tail_sequence does not point to the latest
transaction ID on the umount path. In addition, the REQ_FLUSH for the
checkpoint is not issued either.

To fix this problem with minimal changes, this patch updates
->j_tail_sequence and issues REQ_FLUSH.  (With more complex changes,
some optimizations would be possible, for example avoiding an
unnecessary REQ_FLUSH.)

BTW,

	journal->j_tail_sequence =
		++journal->j_transaction_sequence;

The increment of ->j_transaction_sequence seems to be unnecessary, but
ext3 does the same.

Signed-off-by: OGAWA Hirofumi <hirofumi@mail.parknet.co.jp>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Cc: stable@vger.kernel.org
2016-03-09 23:47:25 -05:00
Jan Kara
2d90c160e5 ext4: more efficient SEEK_DATA implementation
Using SEEK_DATA on a huge sparse file can easily lead to softlockups as
ext4_seek_data() iterates over the hole block by block. Fix the problem
by using the hole size returned from ext4_map_blocks() and thus skipping
the hole in one go.

Also update the SEEK_HOLE implementation to follow the same pattern as
SEEK_DATA to make future maintenance easier.

Furthermore, add cond_resched() to both ext4_seek_data() and
ext4_seek_hole() to avoid softlockups in case an evil user creates a huge
fragmented file and we have to go through lots of extents.
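
For reference, this is the kind of user-space consumer that benefits: a
minimal sketch (illustrative only) that walks a sparse file's data
regions with the standard SEEK_DATA/SEEK_HOLE interface:

  #define _GNU_SOURCE
  #include <fcntl.h>
  #include <stdio.h>
  #include <unistd.h>

  int main(int argc, char **argv)
  {
      if (argc != 2) {
          fprintf(stderr, "usage: %s <file>\n", argv[0]);
          return 1;
      }
      int fd = open(argv[1], O_RDONLY);
      if (fd < 0) {
          perror("open");
          return 1;
      }
      for (off_t pos = 0;;) {
          off_t data = lseek(fd, pos, SEEK_DATA);  /* start of next data region */
          if (data < 0)
              break;                               /* ENXIO: no more data */
          off_t hole = lseek(fd, data, SEEK_HOLE); /* end of that region */
          printf("data: %lld..%lld\n", (long long)data, (long long)hole);
          pos = hole;
      }
      close(fd);
      return 0;
  }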

Signed-off-by: Jan Kara <jack@suse.cz>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
2016-03-09 23:11:13 -05:00
Jan Kara
e3fb8eb14e ext4: cleanup handling of bh->b_state in DAX mmap
ext4_dax_mmap_get_block() updates bh->b_state directly instead of using
ext4_update_bh_state(). This is mostly a cosmetic issue since the DAX
code always passes an on-stack buffer_head, but clean this up to make
the code more uniform.

Signed-off-by: Jan Kara <jack@suse.cz>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
2016-03-09 23:03:27 -05:00
Jan Kara
facab4d971 ext4: return hole from ext4_map_blocks()
Currently, ext4_map_blocks() just returns 0 when it finds a hole and
allocation is not requested. However we have all the information
available to tell how large the hole actually is and there are callers
of ext4_map_blocks() which would save some block-by-block hole iteration
if they knew this information. So fill in struct ext4_map_blocks even
for holes with the information we have. We keep returning 0 for holes to
maintain backward compatibility of the function.

Signed-off-by: Jan Kara <jack@suse.cz>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
2016-03-09 22:54:00 -05:00
Jan Kara
140a52508a ext4: factor out determining of hole size
ext4_ext_put_gap_in_cache() determines hole size in the extent tree,
then trims this with possible delayed allocated blocks, and inserts the
result into the extent status tree. Factor out determination of the size
of the hole in the extent tree as we will need this information in
ext4_ext_map_blocks() as well.

Signed-off-by: Jan Kara <jack@suse.cz>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
2016-03-09 22:46:57 -05:00
Linus Torvalds
718e47a573 Merge tag 'ext4_for_linus_stable' of git://git.kernel.org/pub/scm/linux/kernel/git/tytso/ext4
Pull ext4 fix from Ted Ts'o:
 "This fixes a regression which crept in v4.5-rc5"

* tag 'ext4_for_linus_stable' of git://git.kernel.org/pub/scm/linux/kernel/git/tytso/ext4:
  ext4: iterate over buffer heads correctly in move_extent_per_page()
2016-03-09 19:33:05 -08:00
Jan Kara
87d8a74b56 ext4: fix setting of referenced bit in ext4_es_lookup_extent()
We were setting the referenced bit on the extent structure returned from
ext4_es_lookup_extent(), which is just a private structure on the stack,
so setting the bit had no effect. Set the bit in the structure in the
status tree instead.
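
The underlying mistake is the generic "flag set on a by-value copy"
pattern; a small standalone sketch of it (names are illustrative, not
the ext4 ones):

  #include <stdbool.h>
  #include <stdio.h>

  struct extent { bool referenced; };

  static struct extent tree_entry;        /* stands in for the status tree */

  static struct extent lookup_copy(void)  /* buggy: hands back a copy */
  {
      return tree_entry;
  }

  static struct extent *lookup_ref(void)  /* fixed: points into the tree */
  {
      return &tree_entry;
  }

  int main(void)
  {
      struct extent copy = lookup_copy();
      copy.referenced = true;             /* lost: only the copy changes */
      printf("after copy:    %d\n", tree_entry.referenced);

      lookup_ref()->referenced = true;    /* sticks: the tree entry changes */
      printf("after pointer: %d\n", tree_entry.referenced);
      return 0;
  }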

Signed-off-by: Jan Kara <jack@suse.cz>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
2016-03-09 22:26:55 -05:00
Eryu Guan
6ffe77bad5 ext4: iterate over buffer heads correctly in move_extent_per_page()
In commit bcff24887d ("ext4: don't read blocks from disk after extents
being swapped"), bh is not updated correctly in the for loop, so wrong
data is written to disk. generic/324 catches this on ext4 with sub-page
block sizes.

Fixes: bcff24887d ("ext4: don't read blocks from disk after extents being swapped")
Signed-off-by: Eryu Guan <guaneryu@gmail.com>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
2016-03-09 21:37:53 -05:00
Ross Zwisler
30f471fd88 dax: check return value of dax_radix_entry()
dax_pfn_mkwrite() previously wasn't checking the return value of the
call to dax_radix_entry(), which was a mistake.

Instead, capture this return value and return the appropriate VM_FAULT_
value.

Signed-off-by: Ross Zwisler <ross.zwisler@linux.intel.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Matthew Wilcox <willy@linux.intel.com>
Cc: Dave Chinner <david@fromorbit.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2016-03-09 15:43:42 -08:00
Jan Kara
566e8dfd88 ocfs2: fix return value from ocfs2_page_mkwrite()
ocfs2_page_mkwrite() could mistakenly return an error code instead of a
mkwrite status value.  Fix it.

Signed-off-by: Jan Kara <jack@suse.cz>
Cc: Mark Fasheh <mfasheh@suse.de>
Cc: Joel Becker <jlbec@evilplan.org>
Cc: Junxiao Bi <junxiao.bi@oracle.com>
Cc: Joseph Qi <joseph.qi@huawei.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2016-03-09 15:43:42 -08:00
Martin Brandenburg
acfcbaf192 orangefs: make fs_mount_pending static
Signed-off-by: Martin Brandenburg <martin@omnibond.com>
Signed-off-by: Mike Marshall <hubcap@omnibond.com>
2016-03-09 13:26:39 -05:00
Martin Brandenburg
c62da5853d orangefs: Avoid symlink upcall if target is too long.
Previously the client-core detected this condition by sheer luck!

Since we used strncpy, no NUL byte would be included in the name. The
client-core would call strlen, which would read past the end of its
buffer, but return a number large enough that the client-core would
return ENAMETOOLONG.
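
The general hazard is easy to show in isolation; in this minimal sketch
(TARGET_MAX and set_target() are made up for illustration), strncpy()
leaves the buffer unterminated when the source fills it, so over-long
names are rejected before copying:

  #include <errno.h>
  #include <stdio.h>
  #include <string.h>

  #define TARGET_MAX 8

  static int set_target(char dst[TARGET_MAX], const char *src)
  {
      if (strlen(src) >= TARGET_MAX)
          return -ENAMETOOLONG;         /* refuse before copying */
      strncpy(dst, src, TARGET_MAX);    /* short source: NUL is included */
      return 0;
  }

  int main(void)
  {
      char buf[TARGET_MAX];
      printf("short name: %d\n", set_target(buf, "ok"));
      printf("long name:  %d\n", set_target(buf, "much-too-long-target"));
      return 0;
  }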

Signed-off-by: Martin Brandenburg <martin@omnibond.com>
Signed-off-by: Mike Marshall <hubcap@omnibond.com>
2016-03-09 13:26:39 -05:00
Mike Marshall
162ada7764 Orangefs: improve the POSIXness of interrupted writes...
Don't return EINTR on interrupted writes if some data has already
been written.

Signed-off-by: Mike Marshall <hubcap@omnibond.com>
2016-03-09 13:12:37 -05:00