On May 28, 2014 at 10:55:39 AM CDT, Kevin Hilman <khilman(a)linaro.org> wrote:

Hi Will,

Will Deacon writes:

> On Mon, May 26, 2014 at 07:56:13PM +0100, Larry Bassel wrote:
>> Make calls to ct_user_enter when the kernel is exited
>> and ct_user_exit when the kernel is entered (in el0_da,
>> el0_ia, el0_svc, el0_irq and all of the "error" paths).
>>
>> These macros expand to function calls which will only work
>> properly if el0_sync and related code has been rearranged
>> (in a previous patch of this series).
>>
>> The calls to ct_user_exit are made after hw debugging has been
>> enabled (enable_dbg_and_irq).
>>
>> The call to ct_user_enter is made at the beginning of the
>> kernel_exit macro.
>>
>> This patch is based on earlier work by Kevin Hilman.
>> Save/restore optimizations were also done by Kevin.
>
> Apologies if we've discussed this before (it rings a bell), but why are we
> penalising the fast syscall path with this? Shouldn't TIF_NOHZ contribute to
> our _TIF_WORK_MASK, then we could do the tracking on the syscall slow path?

I'll answer here since Larry inherited this design decision from me.

I considered (and even implemented) forcing the slow syscall path
based on TIF_NOHZ but decided (perhaps wrongly) not to. I guess the
choice is between:

- forcing the overhead of the syscall tracing path on all TIF_NOHZ processes
- forcing the (much smaller) ct_user_exit overhead on all syscalls
  (including the fast syscall path)

I had decided that the former was better, but as I write this, I'm
thinking that the NOHZ tasks should probably eat the extra overhead,
since we expect their interactions with the kernel to be minimal anyway
(part of the goal of full NOHZ).

Ultimately, I'm OK with either way and have the other version ready.

> I think that would tidy up your mov into x19 too.

That's correct. If we force the syscall_trace path, ct_user_enter
wouldn't have to do any context save/restore.

> Also -- how do you track ret_from_fork in the child with these patches?

Not sure I follow the question, but ret_from_fork calls
ret_to_user, which calls kernel_exit, which calls ct_user_enter.

Kevin
Hi,
Here are updated patches to address KVM host support on big-endian
ARM V7/V8.
Changes since the previous versions [1] and [2]:
x) joined both the v7 [1] and v8 [2] series into one
x) addressed nearly all of Christoffer's comments on the previous
patch series
x) Fixed a few issues in 'ARM: KVM: handle 64bit values passed to
mrrc or from mcrr instructions in BE case' that were discovered
after moving to the latest v3.15-rc5 kernel
x) Fixed an issue in supporting 32bit guests on a V8 BE KVM host
All patches except 'ARM: KVM: one_reg coproc set and
get BE fixes' are, as far as I can see, the best possible proposals.
The 'ARM: KVM: one_reg coproc set and get BE fixes' changes still
need more discussion. The patch included with this series covers
one possible solution; I'll post notes about the other approach
previously suggested by Alex Graf separately.
The patches were tested on top of v3.15-rc5 on TC2 (V7) and
Fast Models (V8). All possible combinations of KVM host
and guest (V7/V8, BE/LE, 64bit/32bit) were tested.
Thanks,
Victor
[1] http://lists.infradead.org/pipermail/linux-arm-kernel/2014-February/231881.…
[2] http://lists.infradead.org/pipermail/linux-arm-kernel/2014-February/231889.…
Victor Kamensky (14):
ARM: KVM: switch hypervisor into BE mode in case of BE host
ARM: KVM: fix vgic V7 assembler code to work in BE image
ARM: KVM: handle 64bit values passed to mrrc or from mcrr instructions
in BE case
ARM: KVM: __kvm_vcpu_run function return result fix in BE case
ARM: KVM: vgic mmio should hold data as LE bytes array in BE case
ARM: KVM: MMIO support BE host running LE code
ARM: KVM: one_reg coproc set and get BE fixes
ARM: KVM: enable KVM in Kconfig on big-endian systems
ARM64: KVM: MMIO support BE host running LE code
ARM64: KVM: store kvm_vcpu_fault_info esr_el2 as word
ARM64: KVM: fix vgic_bitmap_get_reg function for BE 64bit case
ARM64: KVM: vgic_elrsr and vgic_eisr need to be byteswapped in BE case
ARM64: KVM: set and get of sys registers in BE case
ARM64: KVM: fix big endian issue in access_vm_reg for 32bit guest
arch/arm/include/asm/kvm_asm.h | 18 ++++++
arch/arm/include/asm/kvm_emulate.h | 22 +++++--
arch/arm/kvm/Kconfig | 2 +-
arch/arm/kvm/coproc.c | 118 +++++++++++++++++++++++++++--------
arch/arm/kvm/init.S | 7 ++-
arch/arm/kvm/interrupts.S | 9 ++-
arch/arm/kvm/interrupts_head.S | 20 +++++-
arch/arm64/include/asm/kvm_emulate.h | 22 +++++++
arch/arm64/kvm/hyp.S | 9 ++-
arch/arm64/kvm/sys_regs.c | 33 +++++++---
virt/kvm/arm/vgic.c | 28 +++++++--
11 files changed, 237 insertions(+), 51 deletions(-)
--
1.8.1.4
From: Mark Brown <broonie(a)linaro.org>
This reposting of the series rolls together the work from Zi Shen and
myself; it includes the revisions Lorenzo provided in the last round of
review. It is also available at:
git://git.kernel.org/pub/scm/linux/kernel/git/broonie/misc.git tags/arm64-topology-dt-mpidr
Mark Brown (4):
arm64: topology: Initialise default topology state immediately
arm64: topology: Add support for topology DT bindings
arm64: topology: Tell the scheduler about the relative power of cores
arm64: topology: Provide relative power numbers for cores
Zi Shen Lim (2):
arm64: sched: Remove unused mc_capable() and smt_capable()
arm64: topology: add MPIDR-based detection
arch/arm64/include/asm/cputype.h | 5 +
arch/arm64/include/asm/topology.h | 3 -
arch/arm64/kernel/topology.c | 411 ++++++++++++++++++++++++++++++++++++--
3 files changed, 401 insertions(+), 18 deletions(-)
--
1.9.2
This is the third attempt to initialize CPU OPPs from CPU core code. The first
two are here: https://lkml.org/lkml/2014/5/19/57 and https://lkml.org/lkml/2014/5/21/199
Drivers expecting CPU OPPs from the device tree initialize the OPP table
themselves by calling of_init_opp_table(), and there is nothing driver specific
in that. They all do it in the same redundant way.
It would be better if we could get rid of the redundancy by initializing CPU
OPPs from CPU core code for all CPUs (that have an "operating-points" property
defined in their node).
This patchset is all about that.
The idea was initially discussed here: https://lkml.org/lkml/2014/5/17/123
V2->V3:
- s/dev_info/dev_dbg
- Fixed changelogs
V1->V2:
- Addition of two new patches: 1/2 & 2/2
- Created separate routine of_init_cpu_opp_table() which wouldn't add any
overhead for the platforms which don't have OPP or OF enabled.
- Added a print for success case as well
- Added Acks from Shawn
- Got rid of extra indentation level by returning early from register_cpu().
Cc: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Cc: Amit Daniel Kachhap <amit.daniel(a)samsung.com>
Cc: Kukjin Kim <kgene.kim(a)samsung.com>
Cc: Shawn Guo <shawn.guo(a)linaro.org>
Viresh Kumar (8):
cpufreq: cpufreq-cpu0: remove dependency on thermal
opp: of_init_opp_table(): return -ENOSYS when feature isn't
implemented
opp: call of_node_{get|put}() from of_init_opp_table()
driver/core: cpu: initialize opp table
cpufreq: arm_big_little: don't initialize opp table
cpufreq: imx6q: don't initialize opp table
cpufreq: cpufreq-cpu0: don't initialize opp table
cpufreq: exynos5440: don't initialize opp table
arch/arm/mach-imx/mach-imx6q.c | 36 ++++++++----------------------------
drivers/base/cpu.c | 30 ++++++++++++++++++++++++++----
drivers/base/power/opp.c | 4 ++++
drivers/cpufreq/Kconfig | 2 +-
drivers/cpufreq/arm_big_little.c | 12 +++++++-----
drivers/cpufreq/arm_big_little_dt.c | 18 ------------------
drivers/cpufreq/cpufreq-cpu0.c | 6 ------
drivers/cpufreq/exynos5440-cpufreq.c | 6 ------
drivers/cpufreq/imx6q-cpufreq.c | 20 +-------------------
include/linux/pm_opp.h | 2 +-
10 files changed, 48 insertions(+), 88 deletions(-)
--
2.0.0.rc2
Dear all,
I have a question about the following code in arch/arm64/kernel/entry.S:
/*
 * EL1 mode handlers.
 */
el1_sync:
	kernel_entry 1
	mrs	x1, esr_el1			// read the syndrome register
	lsr	x24, x1, #ESR_EL1_EC_SHIFT	// exception class
	cmp	x24, #ESR_EL1_EC_DABT_EL1	// data abort in EL1
	b.eq	el1_da
	cmp	x24, #ESR_EL1_EC_SYS64		// configurable trap
	b.eq	el1_undef
	cmp	x24, #ESR_EL1_EC_SP_ALIGN	// stack alignment exception
	b.eq	el1_sp_pc

el1_sp_pc:
	/*
	 * Stack or PC alignment exception handling
	 */
	mrs	x0, far_el1
	mov	x1, x25		// ==> this is an extra operation
	mov	x2, sp
	b	do_sp_pc_abort	// jump to the C exception handler

/* The C exception handler: */
asmlinkage void __exception do_sp_pc_abort(unsigned long addr,
					   unsigned int esr,
					   struct pt_regs *regs)
{
	...
}
We use the x1 register to store the value of ESR, and check that value to
identify which exception handler to jump to.
There is a weird part in the stack alignment exception handler (el1_sp_pc):
why do we need to move x25 into x1?
The ESR has already been stored in x1 and should be passed directly to
do_sp_pc_abort.
"mov x1, x25" looks like an extra operation, and do_sp_pc_abort would get
the wrong value of esr...
I'm not sure whether I'm right or not; I hope someone can take a look at it.
Thanks
BRs
andy