
stretch-openmediavault-rockpro64

Moved Linux

  • linux-mainline-u-boot

    Pinned Images
    2
    0 votes
    2 posts
    353 views
    FrankMF

    2020.01-ayufan-2014-gff2cdd38 released

    ayufan: rockchip: allow to boot scsi4, as JMS585 can have 5 drives
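
    The note concerns the boot order: a JMS585-based SATA controller can expose five drives, so U-Boot now also tries scsi4. As a rough sketch of how to inspect and adjust that order from the U-Boot prompt (assuming this image uses the standard boot_targets environment variable; the exact default list shown here is an assumption, so check printenv first):

    => printenv boot_targets
    => setenv boot_targets "mmc1 mmc0 nvme scsi0 scsi1 scsi2 scsi3 scsi4 usb0 pxe dhcp"
    => saveenv

    saveenv persists the change to the environment storage; without it the order reverts on the next reset.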
  • 0 votes
    1 post
    358 views
    No one has replied
  • ROCKPro64 - USB-C -> HDMI

    ROCKPro64
    3
    0 votes
    3 posts
    407 views
    FrankMF

    @hannescam Hello! That was quite a while ago now; good that we have the screenshot. You could look for exactly that kernel version from Kamil and use it. You didn't need to be a Linux hero for that: plug in the cable and the picture was there.

    Whether it works with something more recent, I don't know. How to install Debian is described here in the forum. Whether Debian drives the USB-C port, I don't know either; you'd have to try it out.

    Since these boards have only ever made sense to me without a desktop, I only ever gave this sort of thing a very brief test. I use the SoCs almost exclusively headless.

  • 0 votes
    2 posts
    1k views
    FrankMF

    This repo contains the tn40xx Linux driver for 10Gbit NICs based on the TN4010 MAC from Tehuti Networks.

    This driver enables the following 10Gb SFP+ NICs:

    D-Link DXE-810S
    Edimax EN-9320SFP+
    StarTech PEX10000SFP
    Synology E10G15-F1
    ... as well as the following 10GBase-T/NBASE-T NICs:

    D-Link DXE-810T
    Edimax EN-9320TX-E
    EXSYS EX-6061-2
    Intellinet 507950
    StarTech ST10GSPEXNB

    Source: https://github.com/ayufan-rock64/tn40xx-driver/tree/master
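
    For reference, an out-of-tree driver like this is normally built against the running kernel's headers and loaded by hand. A minimal sketch, assuming the repo follows the usual kbuild pattern and produces a tn40xx.ko (the exact build targets and supported kernel versions are documented in the repo's README):

    # Build the module against the headers of the running kernel
    git clone https://github.com/ayufan-rock64/tn40xx-driver.git
    cd tn40xx-driver
    make -C /lib/modules/$(uname -r)/build M=$(pwd) modules
    # Load it and check the kernel log for the detected NIC
    sudo insmod tn40xx.ko
    dmesg | tail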

  • Your opinion on the ROCKPro64?

    ROCKPro64
    1
    0 votes
    1 post
    570 views
    No one has replied
  • Bionic Minimal 0.7.8

    ROCKPro64
    2
    0 votes
    2 posts
    554 views
    FrankMF

    Testing, testing.

  • ROCKPro64 available for pre-order again

    ROCKPro64
    5
    0 votes
    5 posts
    979 views
    FrankMF

    My delivery is on its way 🙂

    Hello Mr. Frank Mankel, Order 62068 just shipped on July 18, 2018 from Shenzhen transit to Hong Kong DHL.

  • stretch-minimal-rockpro64

    Moved Linux
    3
    0 votes
    3 posts
    981 views
    FrankMF

    A quick test of what the memory can do. (How to build tinymembench yourself is sketched after the output below.)

    rock64@rockpro64:~/tinymembench$ ./tinymembench
    tinymembench v0.4.9 (simple benchmark for memory throughput and latency)

    ==========================================================================
    == Memory bandwidth tests                                               ==
    ==                                                                      ==
    == Note 1: 1MB = 1000000 bytes                                          ==
    == Note 2: Results for 'copy' tests show how many bytes can be          ==
    ==         copied per second (adding together read and writen           ==
    ==         bytes would have provided twice higher numbers)              ==
    == Note 3: 2-pass copy means that we are using a small temporary buffer ==
    ==         to first fetch data into it, and only then write it to the   ==
    ==         destination (source -> L1 cache, L1 cache -> destination)    ==
    == Note 4: If sample standard deviation exceeds 0.1%, it is shown in    ==
    ==         brackets                                                     ==
    ==========================================================================
     C copy backwards                             :   2812.7 MB/s
     C copy backwards (32 byte blocks)            :   2811.9 MB/s
     C copy backwards (64 byte blocks)            :   2632.8 MB/s
     C copy                                       :   2667.2 MB/s
     C copy prefetched (32 bytes step)            :   2633.5 MB/s
     C copy prefetched (64 bytes step)            :   2640.8 MB/s
     C 2-pass copy                                :   2509.8 MB/s
     C 2-pass copy prefetched (32 bytes step)     :   2431.6 MB/s
     C 2-pass copy prefetched (64 bytes step)     :   2424.1 MB/s
     C fill                                       :   4887.7 MB/s (0.5%)
     C fill (shuffle within 16 byte blocks)       :   4883.0 MB/s
     C fill (shuffle within 32 byte blocks)       :   4889.3 MB/s
     C fill (shuffle within 64 byte blocks)       :   4889.2 MB/s
     ---
     standard memcpy                              :   2807.3 MB/s
     standard memset                              :   4890.4 MB/s (0.3%)
     ---
     NEON LDP/STP copy                            :   2803.7 MB/s
     NEON LDP/STP copy pldl2strm (32 bytes step)  :   2802.1 MB/s
     NEON LDP/STP copy pldl2strm (64 bytes step)  :   2800.7 MB/s
     NEON LDP/STP copy pldl1keep (32 bytes step)  :   2745.5 MB/s
     NEON LDP/STP copy pldl1keep (64 bytes step)  :   2745.8 MB/s
     NEON LD1/ST1 copy                            :   2801.9 MB/s
     NEON STP fill                                :   4888.9 MB/s (0.3%)
     NEON STNP fill                               :   4850.1 MB/s
     ARM LDP/STP copy                             :   2803.8 MB/s
     ARM STP fill                                 :   4893.0 MB/s (0.5%)
     ARM STNP fill                                :   4851.7 MB/s

    ==========================================================================
    == Framebuffer read tests.                                              ==
    ==                                                                      ==
    == Many ARM devices use a part of the system memory as the framebuffer, ==
    == typically mapped as uncached but with write-combining enabled.       ==
    == Writes to such framebuffers are quite fast, but reads are much       ==
    == slower and very sensitive to the alignment and the selection of      ==
    == CPU instructions which are used for accessing memory.                ==
    ==                                                                      ==
    == Many x86 systems allocate the framebuffer in the GPU memory,         ==
    == accessible for the CPU via a relatively slow PCI-E bus. Moreover,    ==
    == PCI-E is asymmetric and handles reads a lot worse than writes.       ==
    ==                                                                      ==
    == If uncached framebuffer reads are reasonably fast (at least 100 MB/s ==
    == or preferably >300 MB/s), then using the shadow framebuffer layer    ==
    == is not necessary in Xorg DDX drivers, resulting in a nice overall    ==
    == performance improvement. For example, the xf86-video-fbturbo DDX     ==
    == uses this trick.                                                     ==
    ==========================================================================
     NEON LDP/STP copy (from framebuffer)         :    602.5 MB/s
     NEON LDP/STP 2-pass copy (from framebuffer)  :    551.6 MB/s
     NEON LD1/ST1 copy (from framebuffer)         :    667.1 MB/s
     NEON LD1/ST1 2-pass copy (from framebuffer)  :    605.6 MB/s
     ARM LDP/STP copy (from framebuffer)          :    445.3 MB/s
     ARM LDP/STP 2-pass copy (from framebuffer)   :    428.8 MB/s

    ==========================================================================
    == Memory latency test                                                  ==
    ==                                                                      ==
    == Average time is measured for random memory accesses in the buffers   ==
    == of different sizes. The larger is the buffer, the more significant   ==
    == are relative contributions of TLB, L1/L2 cache misses and SDRAM      ==
    == accesses. For extremely large buffer sizes we are expecting to see   ==
    == page table walk with several requests to SDRAM for almost every      ==
    == memory access (though 64MiB is not nearly large enough to experience ==
    == this effect to its fullest).                                         ==
    ==                                                                      ==
    == Note 1: All the numbers are representing extra time, which needs to  ==
    ==         be added to L1 cache latency. The cycle timings for L1 cache ==
    ==         latency can be usually found in the processor documentation. ==
    == Note 2: Dual random read means that we are simultaneously performing ==
    ==         two independent memory accesses at a time. In the case if    ==
    ==         the memory subsystem can't handle multiple outstanding       ==
    ==         requests, dual random read has the same timings as two       ==
    ==         single reads performed one after another.                    ==
    ==========================================================================
    block size : single random read / dual random read
          1024 :    0.0 ns          /     0.0 ns
          2048 :    0.0 ns          /     0.0 ns
          4096 :    0.0 ns          /     0.0 ns
          8192 :    0.0 ns          /     0.0 ns
         16384 :    0.0 ns          /     0.0 ns
         32768 :    0.0 ns          /     0.0 ns
         65536 :    4.5 ns          /     7.2 ns
        131072 :    6.8 ns          /     9.7 ns
        262144 :    9.8 ns          /    12.8 ns
        524288 :   11.4 ns          /    14.7 ns
       1048576 :   16.0 ns          /    22.6 ns
       2097152 :  114.0 ns          /   175.3 ns
       4194304 :  161.7 ns          /   219.9 ns
       8388608 :  190.7 ns          /   241.5 ns
      16777216 :  205.3 ns          /   250.5 ns
      33554432 :  212.9 ns          /   255.5 ns
      67108864 :  222.3 ns          /   271.1 ns
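
    For anyone who wants to reproduce the numbers: tinymembench builds from source in a few seconds. A minimal sketch, assuming a Debian-based image with network access and using ssvb's upstream repo (an assumption about where the binary above came from):

    # Install build tools, fetch and compile the benchmark, then run it
    sudo apt install git build-essential
    git clone https://github.com/ssvb/tinymembench.git
    cd tinymembench
    make
    ./tinymembench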