
stretch-openmediavault-rockpro64

Moved: Linux

  • Mainline 5.11.x

    Images
    0 votes
    1 post
    218 views
    No one has replied
  • 0 votes
    1 post
    355 views
    No one has replied
  • Image 0.9.14 - Quick Test

    ROCKPro64
    0 votes
    1 post
    216 views
    No one has replied
  • 0 votes
    2 posts
    890 views
    FrankMF

    The problem of the partition not being resized after installing the system on a USB3 HDD has been fixed in

    Welcome to ARMBIAN 5.67.181217 nightly Debian GNU/Linux 9 (stretch) 4.4.167-rockchip64

    A real improvement!
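
    If your image predates the fix, the partition can also be grown by hand. A minimal sketch, assuming an ext4 root filesystem on /dev/sda1 (device and partition number are placeholders, adjust them for your setup):

    sudo growpart /dev/sda 1    # grow partition 1 to fill the disk (growpart ships in cloud-guest-utils)
    sudo resize2fs /dev/sda1    # grow the ext4 filesystem to match the new partition size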

  • ROCKPro64 Armbian Image - first test

    Moved: Armbian
    0 votes
    13 posts
    2k views
    FrankMF

    First big failure with Armbian 😞

    Today I tried to set up my NAS with Armbian. Mounting the RAID and so on was no problem. But when it came to Restic and Go, the fun was over. Packages too old, extra sources added, and nothing but errors. Hmm!?

    Since I had had enough after two hours, I put the whole thing on ice for now. Sometimes it is better to start over fresh on another day.

    The NAS is now running again with

    rock64@rockpro64v_2_1:~$ uname -a
    Linux rockpro64v_2_1 4.19.0-rc4-1071-ayufan-g10a63ec6c2a2 #1 SMP PREEMPT Mon Oct 1 07:33:40 UTC 2018 aarch64 aarch64 aarch64 GNU/Linux

    That does not run so badly, if only the USB3 interface would finally work properly.

    Update: Sometimes you just have to do it right 🙂 https://forum.frank-mankel.org/topic/420/rockpro64-armbian-go-restic-installieren
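
    For reference, the usual way around outdated distro packages is to install Go and restic straight from upstream. A rough sketch for arm64; the Go version below is only a placeholder, check the current release on go.dev first:

    wget https://go.dev/dl/go1.22.0.linux-arm64.tar.gz        # fetch a current Go toolchain (example version)
    sudo tar -C /usr/local -xzf go1.22.0.linux-arm64.tar.gz   # unpack to /usr/local/go
    export PATH=$PATH:/usr/local/go/bin                       # make the new go binary visible
    git clone https://github.com/restic/restic.git            # then build restic from source with it
    cd restic && go run build.go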

  • Image 0.7.8 - Latest release

    ROCKPro64
    0 votes
    1 post
    535 views
    No one has replied
  • Mainline Commits

    ROCKPro64
    0 votes
    2 posts
    661 views
    FrankMF

    Digging up this old thread again.

    Qualcomm Snapdragon 835 SoC support along with the HiSilicon Hi3670, many NVIDIA Tegra improvements, GTA04A5 phone support, and more. There is also now mainline ARM SBC support for the Orange Pi Zero Plus2, Orange Pi One Plus, Pine64 LTS, Banana Pi M2+ H5, 64-bit Banana Pi M2+ H3, ASUS Tinker Board S, RockPro64, Rock960, and ROC-RK-3399-PC.
    Source: https://www.phoronix.com/scan.php?page=article&item=linux-420-features&num=1

    Found in the Pine64 forum.
  • stretch-minimal-rockpro64

    Moved: Linux
    0 votes
    3 posts
    978 views
    FrankMF

    A quick test of what the memory can do.

    rock64@rockpro64:~/tinymembench$ ./tinymembench
    tinymembench v0.4.9 (simple benchmark for memory throughput and latency)

    ==========================================================================
    == Memory bandwidth tests                                               ==
    ==                                                                      ==
    == Note 1: 1MB = 1000000 bytes                                          ==
    == Note 2: Results for 'copy' tests show how many bytes can be         ==
    ==         copied per second (adding together read and writen          ==
    ==         bytes would have provided twice higher numbers)             ==
    == Note 3: 2-pass copy means that we are using a small temporary buffer ==
    ==         to first fetch data into it, and only then write it to the  ==
    ==         destination (source -> L1 cache, L1 cache -> destination)   ==
    == Note 4: If sample standard deviation exceeds 0.1%, it is shown in   ==
    ==         brackets                                                    ==
    ==========================================================================

     C copy backwards                                     :   2812.7 MB/s
     C copy backwards (32 byte blocks)                    :   2811.9 MB/s
     C copy backwards (64 byte blocks)                    :   2632.8 MB/s
     C copy                                               :   2667.2 MB/s
     C copy prefetched (32 bytes step)                    :   2633.5 MB/s
     C copy prefetched (64 bytes step)                    :   2640.8 MB/s
     C 2-pass copy                                        :   2509.8 MB/s
     C 2-pass copy prefetched (32 bytes step)             :   2431.6 MB/s
     C 2-pass copy prefetched (64 bytes step)             :   2424.1 MB/s
     C fill                                               :   4887.7 MB/s (0.5%)
     C fill (shuffle within 16 byte blocks)               :   4883.0 MB/s
     C fill (shuffle within 32 byte blocks)               :   4889.3 MB/s
     C fill (shuffle within 64 byte blocks)               :   4889.2 MB/s
     ---
     standard memcpy                                      :   2807.3 MB/s
     standard memset                                      :   4890.4 MB/s (0.3%)
     ---
     NEON LDP/STP copy                                    :   2803.7 MB/s
     NEON LDP/STP copy pldl2strm (32 bytes step)          :   2802.1 MB/s
     NEON LDP/STP copy pldl2strm (64 bytes step)          :   2800.7 MB/s
     NEON LDP/STP copy pldl1keep (32 bytes step)          :   2745.5 MB/s
     NEON LDP/STP copy pldl1keep (64 bytes step)          :   2745.8 MB/s
     NEON LD1/ST1 copy                                    :   2801.9 MB/s
     NEON STP fill                                        :   4888.9 MB/s (0.3%)
     NEON STNP fill                                       :   4850.1 MB/s
     ARM LDP/STP copy                                     :   2803.8 MB/s
     ARM STP fill                                         :   4893.0 MB/s (0.5%)
     ARM STNP fill                                        :   4851.7 MB/s

    ==========================================================================
    == Framebuffer read tests.                                              ==
    ==                                                                      ==
    == Many ARM devices use a part of the system memory as the framebuffer, ==
    == typically mapped as uncached but with write-combining enabled.       ==
    == Writes to such framebuffers are quite fast, but reads are much       ==
    == slower and very sensitive to the alignment and the selection of      ==
    == CPU instructions which are used for accessing memory.                ==
    ==                                                                      ==
    == Many x86 systems allocate the framebuffer in the GPU memory,         ==
    == accessible for the CPU via a relatively slow PCI-E bus. Moreover,    ==
    == PCI-E is asymmetric and handles reads a lot worse than writes.       ==
    ==                                                                      ==
    == If uncached framebuffer reads are reasonably fast (at least 100 MB/s ==
    == or preferably >300 MB/s), then using the shadow framebuffer layer    ==
    == is not necessary in Xorg DDX drivers, resulting in a nice overall    ==
    == performance improvement. For example, the xf86-video-fbturbo DDX     ==
    == uses this trick.                                                     ==
    ==========================================================================

     NEON LDP/STP copy (from framebuffer)                 :    602.5 MB/s
     NEON LDP/STP 2-pass copy (from framebuffer)          :    551.6 MB/s
     NEON LD1/ST1 copy (from framebuffer)                 :    667.1 MB/s
     NEON LD1/ST1 2-pass copy (from framebuffer)          :    605.6 MB/s
     ARM LDP/STP copy (from framebuffer)                  :    445.3 MB/s
     ARM LDP/STP 2-pass copy (from framebuffer)           :    428.8 MB/s

    ==========================================================================
    == Memory latency test                                                  ==
    ==                                                                      ==
    == Average time is measured for random memory accesses in the buffers   ==
    == of different sizes. The larger is the buffer, the more significant   ==
    == are relative contributions of TLB, L1/L2 cache misses and SDRAM      ==
    == accesses. For extremely large buffer sizes we are expecting to see   ==
    == page table walk with several requests to SDRAM for almost every      ==
    == memory access (though 64MiB is not nearly large enough to experience ==
    == this effect to its fullest).                                         ==
    ==                                                                      ==
    == Note 1: All the numbers are representing extra time, which needs to  ==
    ==         be added to L1 cache latency. The cycle timings for L1 cache ==
    ==         latency can be usually found in the processor documentation. ==
    == Note 2: Dual random read means that we are simultaneously performing ==
    ==         two independent memory accesses at a time. In the case if    ==
    ==         the memory subsystem can't handle multiple outstanding       ==
    ==         requests, dual random read has the same timings as two       ==
    ==         single reads performed one after another.                    ==
    ==========================================================================

    block size : single random read / dual random read
          1024 :    0.0 ns          /     0.0 ns
          2048 :    0.0 ns          /     0.0 ns
          4096 :    0.0 ns          /     0.0 ns
          8192 :    0.0 ns          /     0.0 ns
         16384 :    0.0 ns          /     0.0 ns
         32768 :    0.0 ns          /     0.0 ns
         65536 :    4.5 ns          /     7.2 ns
        131072 :    6.8 ns          /     9.7 ns
        262144 :    9.8 ns          /    12.8 ns
        524288 :   11.4 ns          /    14.7 ns
       1048576 :   16.0 ns          /    22.6 ns
       2097152 :  114.0 ns          /   175.3 ns
       4194304 :  161.7 ns          /   219.9 ns
       8388608 :  190.7 ns          /   241.5 ns
      16777216 :  205.3 ns          /   250.5 ns
      33554432 :  212.9 ns          /   255.5 ns
      67108864 :  222.3 ns          /   271.1 ns
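
    To reproduce this on another board, tinymembench builds from source in a minute. A minimal sketch, assuming a Debian-based system (the repository is ssvb/tinymembench on GitHub):

    sudo apt install -y git build-essential      # compiler and git
    git clone https://github.com/ssvb/tinymembench.git
    cd tinymembench
    make                                         # produces the ./tinymembench binary
    ./tinymembench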