[{"data":1,"prerenderedAt":2871},["ShallowReactive",2],{"post-\u002Fblog\u002F2023\u002Ftaming-the-cephodian-octopus-or-quincy":3},{"id":4,"title":5,"body":6,"categories":2849,"date":2855,"description":41,"extension":2856,"image":2857,"meta":2858,"navigation":255,"path":2867,"seo":2868,"stem":2869,"__hash__":2870},"blog\u002Fblog\u002F2023\u002Ftaming-the-cephodian-octopus-or-quincy.md","Taming The Cephodian Octopus - Reef",{"type":7,"value":8,"toc":2826},"minimark",[9,13,16,33,36,43,48,51,56,77,82,96,99,154,159,166,170,173,339,348,420,423,497,500,508,512,515,665,668,725,728,751,755,758,763,784,787,790,873,876,900,903,906,909,961,964,969,972,977,980,985,992,997,1008,1011,1041,1044,1051,1054,1059,1064,1067,1070,1073,1077,1102,1106,1110,1113,1116,1124,1127,1130,1136,1143,1326,1329,1395,1406,1410,1413,1418,1425,1430,1433,1438,1445,1489,1492,1496,1503,1506,1647,1661,1665,1678,1681,1715,1723,1729,1733,1737,1740,1746,1753,1757,1763,1769,1772,1778,1781,1792,1799,1803,1806,1812,1819,1825,1829,1833,1844,1847,1853,1856,1859,1865,1868,1874,1877,1883,1886,1892,1896,1899,1905,1908,1914,1917,1923,1926,1932,1936,1939,1945,1949,1952,1956,1967,1975,1979,1982,1985,2017,2020,2025,2028,2031,2073,2080,2122,2129,2132,2201,2204,2383,2386,2484,2503,2801,2804,2808,2822],[10,11,12],"p",{},"My Ceph cluster runs now! And it is amazingly powerful :-)",[10,14,15],{},"Updates for Ceph Reef: Quincy is no longer the latest release, so I reinstalled my cluster with Reef (now using 6 Odroid HC4s, each with a single 4TB disk) and updated this blog post.",[10,17,18,19,26,27,32],{},"Quite some time has passed since my ",[20,21,25],"a",{"href":22,"rel":23},"https:\u002F\u002Fthe78mole.de\u002Fhow-to-build-a-private-storage-cluster-with-ceph\u002F",[24],"nofollow","last tries to get a ceph cluster running on ARM"," and ",[20,28,31],{"href":29,"rel":30},"https:\u002F\u002Fthe78mole.de\u002Fcompile-ceph-mimic-on-arm-32-bit\u002F",[24],"compiling it on 32-bit ARM",". 
But, as with every unsolved problem, moles are well known not to forget about unfinished tunneling projects. It's again time to blast the solid rock with some sticks of dynamite :-) Ah, wrong project... we are under water, not below ground... (do not get confused... Octopus was a release of Ceph, and v17.2 aka Quincy is the latest version as of writing this)",[10,34,35],{},"I purchased three ODROID HC4s (the P-Kit) quite some weeks ago. Each has a 64-bit ARM core, 4 GB of RAM and two slots for hard drives. The drives I bought some years ago (4TB WD Red Standard) were still not in continuous use, so I could take two of them for my cluster; they had just served to keep some data temporarily. I now bought another four WD Red drives (this time the Pro) and assembled them with the two remaining HC4-P kits. So I now have three HC4s containing six WD Red hard drives in total, which means 24 TB of raw capacity.",[10,37,38],{},[39,40],"img",{"alt":41,"src":42},"","\u002Fimages\u002Fblog\u002F2023\u002F04\u002Fimage-22-1024x927.png",[10,44,45],{},[39,46],{"alt":41,"src":47},"\u002Fimages\u002Fblog\u002F2023\u002F04\u002Fimage-23-1024x576.png",[10,49,50],{},"As you may have guessed already, installing Ceph is still not an easy task. You often cannot simply stick to the manual, and that is why I started writing again. In fact, besides many more wrong ways that ended very quickly, I took two long ways without success. 
And here is the way that was successful, with only minor headaches to crush.",[52,53,55],"h1",{"id":54},"installation","Installation",[10,57,58,59,64,65,70,71,76],{},"After trying different things (the original HC4 image, Debian Bullseye stable and unstable), I moved on to ",[20,60,63],{"href":61,"rel":62},"https:\u002F\u002Fwww.armbian.com\u002Fodroid-hc4\u002F",[24],"armbian for ODROID HC4"," (",[20,66,69],{"href":67,"rel":68},"https:\u002F\u002Fdl.armbian.com\u002Fodroidhc4\u002Farchive\u002FArmbian_23.11.1_Odroidhc4_bookworm_current_6.1.63.img.xz",[24],"Bookworm CLI"," as of Nov 30th, 2023). It is based on Debian and provides recent updates (its repos currently contain ceph 16.2.11, but Ceph provides up-to-date packages for Debian Bookworm). Flashing is easily done with ",[20,72,75],{"href":73,"rel":74},"https:\u002F\u002Fwww.balena.io\u002Fetcher#download-etcher",[24],"Balena Etcher",".",[78,79,81],"h2",{"id":80},"hardware-preparation","Hardware Preparation",[10,83,84,85,90,91,95],{},"But to get it running, you need to get rid of petitboot, which comes preinstalled in the HC4's SPI flash. The instructions mentioned on the armbian page did not work for me: with the latest kernel, the MTD devices no longer show up in the booted system (reached by holding the bottom button of the HC4 while powering on), and petitboot did not show up either when a monitor was connected. I don't know exactly why, but I soon took the screws out of the case and connected the UART (115200 8N1). Fortunately, I had an original ",[20,86,89],{"href":87,"rel":88},"https:\u002F\u002Fwww.hardkernel.com\u002Fshop\u002Fusb-uart-2-module-kit-copy\u002F",[24],"ODROID USB-serial converter (aka."," ",[20,92,94],{"href":87,"rel":93},[24],"USB-UART 2 Module Kit",") at hand.",[10,97,98],{},"After power-up, a minimal system presented itself on the console (quite obviously, this is petitboot). 
So I issued the commands to erase the SPI flash:",[100,101,105],"pre",{"className":102,"code":103,"language":104,"meta":41,"style":41},"language-bash shiki shiki-themes github-light github-dark","$ flash_eraseall \u002Fdev\u002Fmtd0\n$ flash_eraseall \u002Fdev\u002Fmtd1\n$ flash_eraseall \u002Fdev\u002Fmtd2\n$ flash_eraseall \u002Fdev\u002Fmtd3\n","bash",[106,107,108,124,134,144],"code",{"__ignoreMap":41},[109,110,113,117,121],"span",{"class":111,"line":112},"line",1,[109,114,116],{"class":115},"sScJk","$",[109,118,120],{"class":119},"sZZnC"," flash_eraseall",[109,122,123],{"class":119}," \u002Fdev\u002Fmtd0\n",[109,125,127,129,131],{"class":111,"line":126},2,[109,128,116],{"class":115},[109,130,120],{"class":119},[109,132,133],{"class":119}," \u002Fdev\u002Fmtd1\n",[109,135,137,139,141],{"class":111,"line":136},3,[109,138,116],{"class":115},[109,140,120],{"class":119},[109,142,143],{"class":119}," \u002Fdev\u002Fmtd2\n",[109,145,147,149,151],{"class":111,"line":146},4,[109,148,116],{"class":115},[109,150,120],{"class":119},[109,152,153],{"class":119}," \u002Fdev\u002Fmtd3\n",[10,155,156],{},[39,157],{"alt":41,"src":158},"\u002Fimages\u002Fblog\u002F2023\u002F04\u002Fimage-21.png",[10,160,161,162,165],{},"This took (especially for \u002Fdev\u002Fmtd3) a few minutes... After re-powering the board, all went fine and the system came up from the SD card. 
Keep the serial console connected or log in via SSH (",[106,163,164],{},"user:root\u002Fpass:1234",") and follow the initial installation wizard, initializing username, passwords, default shell and locales.",[78,167,169],{"id":168},"prepare-the-linux-system","Prepare the Linux System",[10,171,172],{},"Now update the system, change the hostname and reboot:",[100,174,176],{"className":102,"code":175,"language":104,"meta":41,"style":41},"$ apt update\n$ apt upgrade -y\n$ apt install -y vim\n$ hostname \u003CYOUR_HOSTNAME>\n$ hostname > \u002Fetc\u002Fhostname\n\n### On the HC4\n$ sed -i \"s\u002Fodroidhc4\u002F$(hostname)\u002F\" \u002Fetc\u002Fhosts\n### On a pi4b\n$ sed -i \"s\u002Fpi4b\u002F$(hostname)\u002F\" \u002Fetc\u002Fhosts\n\n$ sed -i \"s\u002F^.*\\(SIZE\\)=.*$\u002F\\1=256M\u002F\" \u002Fetc\u002Fdefault\u002Farmbian-ramlog\n$ reboot\n",[106,177,178,188,201,216,237,250,257,264,287,293,311,316,331],{"__ignoreMap":41},[109,179,180,182,185],{"class":111,"line":112},[109,181,116],{"class":115},[109,183,184],{"class":119}," apt",[109,186,187],{"class":119}," update\n",[109,189,190,192,194,197],{"class":111,"line":126},[109,191,116],{"class":115},[109,193,184],{"class":119},[109,195,196],{"class":119}," upgrade",[109,198,200],{"class":199},"sj4cs"," -y\n",[109,202,203,205,207,210,213],{"class":111,"line":136},[109,204,116],{"class":115},[109,206,184],{"class":119},[109,208,209],{"class":119}," install",[109,211,212],{"class":199}," -y",[109,214,215],{"class":119}," vim\n",[109,217,218,220,223,227,230,234],{"class":111,"line":146},[109,219,116],{"class":115},[109,221,222],{"class":119}," hostname",[109,224,226],{"class":225},"szBVR"," \u003C",[109,228,229],{"class":119},"YOUR_HOSTNAM",[109,231,233],{"class":232},"sVt8B","E",[109,235,236],{"class":225},">\n",[109,238,240,242,244,247],{"class":111,"line":239},5,[109,241,116],{"class":115},[109,243,222],{"class":119},[109,245,246],{"class":225}," >",[109,248,249],{"class":119}," 
\u002Fetc\u002Fhostname\n",[109,251,253],{"class":111,"line":252},6,[109,254,256],{"emptyLinePlaceholder":255},true,"\n",[109,258,260],{"class":111,"line":259},7,[109,261,263],{"class":262},"sJ8bj","### On the HC4\n",[109,265,267,269,272,275,278,281,284],{"class":111,"line":266},8,[109,268,116],{"class":115},[109,270,271],{"class":119}," sed",[109,273,274],{"class":199}," -i",[109,276,277],{"class":119}," \"s\u002Fodroidhc4\u002F$(",[109,279,280],{"class":115},"hostname",[109,282,283],{"class":119},")\u002F\"",[109,285,286],{"class":119}," \u002Fetc\u002Fhosts\n",[109,288,290],{"class":111,"line":289},9,[109,291,292],{"class":262},"### On a pi4b\n",[109,294,296,298,300,302,305,307,309],{"class":111,"line":295},10,[109,297,116],{"class":115},[109,299,271],{"class":119},[109,301,274],{"class":199},[109,303,304],{"class":119}," \"s\u002Fpi4b\u002F$(",[109,306,280],{"class":115},[109,308,283],{"class":119},[109,310,286],{"class":119},[109,312,314],{"class":111,"line":313},11,[109,315,256],{"emptyLinePlaceholder":255},[109,317,319,321,323,325,328],{"class":111,"line":318},12,[109,320,116],{"class":115},[109,322,271],{"class":119},[109,324,274],{"class":199},[109,326,327],{"class":119}," \"s\u002F^.*\\(SIZE\\)=.*$\u002F\\1=256M\u002F\"",[109,329,330],{"class":119}," \u002Fetc\u002Fdefault\u002Farmbian-ramlog\n",[109,332,334,336],{"class":111,"line":333},13,[109,335,116],{"class":115},[109,337,338],{"class":119}," reboot\n",[10,340,341,342,347],{},"For getting a nice status on OLED, you can easily install ",[20,343,346],{"href":344,"rel":345},"https:\u002F\u002Fgithub.com\u002Frpardini\u002Fsys-oled-hc4",[24],"sys-oled-hc4"," as a user with sudo permissions:",[100,349,351],{"className":102,"code":350,"language":104,"meta":41,"style":41},"$ git clone https:\u002F\u002Fgithub.com\u002Frpardini\u002Fsys-oled-hc4\n$ cd sys-oled-hc4\n$ sudo .\u002Finstall.sh\n$ sudo sed -i \"s\u002Feth0\u002Fend0\u002F\" \u002Fetc\u002Fsys-oled.conf\n# or\n$ vi \u002Fetc\u002Fsys-oled.conf   # 
Change network interface to end0\n",[106,352,353,366,376,386,402,407],{"__ignoreMap":41},[109,354,355,357,360,363],{"class":111,"line":112},[109,356,116],{"class":115},[109,358,359],{"class":119}," git",[109,361,362],{"class":119}," clone",[109,364,365],{"class":119}," https:\u002F\u002Fgithub.com\u002Frpardini\u002Fsys-oled-hc4\n",[109,367,368,370,373],{"class":111,"line":126},[109,369,116],{"class":115},[109,371,372],{"class":119}," cd",[109,374,375],{"class":119}," sys-oled-hc4\n",[109,377,378,380,383],{"class":111,"line":136},[109,379,116],{"class":115},[109,381,382],{"class":119}," sudo",[109,384,385],{"class":119}," .\u002Finstall.sh\n",[109,387,388,390,392,394,396,399],{"class":111,"line":146},[109,389,116],{"class":115},[109,391,382],{"class":119},[109,393,271],{"class":119},[109,395,274],{"class":199},[109,397,398],{"class":119}," \"s\u002Feth0\u002Fend0\u002F\"",[109,400,401],{"class":119}," \u002Fetc\u002Fsys-oled.conf\n",[109,403,404],{"class":111,"line":239},[109,405,406],{"class":262},"# or\n",[109,408,409,411,414,417],{"class":111,"line":252},[109,410,116],{"class":115},[109,412,413],{"class":119}," vi",[109,415,416],{"class":119}," \u002Fetc\u002Fsys-oled.conf",[109,418,419],{"class":262},"   # Change network interface to end0\n",[10,421,422],{},"To bring a bit of color into your life (and to quickly see whether you are root or somebody else):",[100,424,426],{"className":102,"code":425,"language":104,"meta":41,"style":41},"$ sudo curl --silent \\\n  -o \u002Froot\u002F.bashrc \\\n  https:\u002F\u002Fraw.githubusercontent.com\u002Fthe78mole\u002Fthe78mole-snippets\u002Fmain\u002Fconfigs\u002F.bashrc_root\n$ curl --silent \\\n  -o ~\u002F.bashrc \\\n  https:\u002F\u002Fraw.githubusercontent.com\u002Fthe78mole\u002Fthe78mole-snippets\u002Fmain\u002Fconfigs\u002F.bashrc_user\n$ sudo cp ~\u002F.profile 
\u002Froot\u002F\n",[106,427,428,443,453,458,468,477,482],{"__ignoreMap":41},[109,429,430,432,434,437,440],{"class":111,"line":112},[109,431,116],{"class":115},[109,433,382],{"class":119},[109,435,436],{"class":119}," curl",[109,438,439],{"class":199}," --silent",[109,441,442],{"class":199}," \\\n",[109,444,445,448,451],{"class":111,"line":126},[109,446,447],{"class":199},"  -o",[109,449,450],{"class":119}," \u002Froot\u002F.bashrc",[109,452,442],{"class":199},[109,454,455],{"class":111,"line":136},[109,456,457],{"class":119},"  https:\u002F\u002Fraw.githubusercontent.com\u002Fthe78mole\u002Fthe78mole-snippets\u002Fmain\u002Fconfigs\u002F.bashrc_root\n",[109,459,460,462,464,466],{"class":111,"line":146},[109,461,116],{"class":115},[109,463,436],{"class":119},[109,465,439],{"class":199},[109,467,442],{"class":199},[109,469,470,472,475],{"class":111,"line":239},[109,471,447],{"class":199},[109,473,474],{"class":119}," ~\u002F.bashrc",[109,476,442],{"class":199},[109,478,479],{"class":111,"line":252},[109,480,481],{"class":119},"  https:\u002F\u002Fraw.githubusercontent.com\u002Fthe78mole\u002Fthe78mole-snippets\u002Fmain\u002Fconfigs\u002F.bashrc_user\n",[109,483,484,486,488,491,494],{"class":111,"line":259},[109,485,116],{"class":115},[109,487,382],{"class":119},[109,489,490],{"class":119}," cp",[109,492,493],{"class":119}," ~\u002F.profile",[109,495,496],{"class":119}," \u002Froot\u002F\n",[10,498,499],{},"Now let's continue with installing the real Ceph stuff :-)",[10,501,502,503,76],{},"Additionally, I prepared a Raspberry Pi 4, also with armbian CLI aka Bookworm... 
I'll use cephadm in the curl-based installation variant to deploy my cluster, sticking to the ",[20,504,507],{"href":505,"rel":506},"https:\u002F\u002Fdocs.ceph.com\u002Fen\u002Flatest\u002Fcephadm\u002Finstall\u002F#curl-based-installation",[24],"official documentation",[78,509,511],{"id":510},"install-ceph-on-the-master-node-rpi-4","Install Ceph on the Master Node (RPi 4)",[10,513,514],{},"On the master node, you should install a Jammy CLI image. Do not rely on the distro's cephadm package: it does not exist for this version of Ceph and would install v16 (Pacific) instead of Reef.",[100,516,518],{"className":102,"code":517,"language":104,"meta":41,"style":41},"$ sudo apt install podman catatonit lvm2\n$ CEPH_RELEASE=18.2.1 # replace this with the active release\n$ curl --silent --remote-name --location https:\u002F\u002Fdownload.ceph.com\u002Frpm-${CEPH_RELEASE}\u002Fel9\u002Fnoarch\u002Fcephadm\n$ chmod +x cephadm\n$ sudo .\u002Fcephadm add-repo --release reef\n#### You can check if ceph is the desired version with\n### apt search cephadm\n$ sudo .\u002Fcephadm install\n$ ip a                   # Watch out for your IP address\n$ sudo cephadm bootstrap --mon-ip \u003CTHIS_SYSTEMS_NETWORK_IP>\n",[106,519,520,539,552,575,588,606,611,616,627,640],{"__ignoreMap":41},[109,521,522,524,526,528,530,533,536],{"class":111,"line":112},[109,523,116],{"class":115},[109,525,382],{"class":119},[109,527,184],{"class":119},[109,529,209],{"class":119},[109,531,532],{"class":119}," podman",[109,534,535],{"class":119}," catatonit",[109,537,538],{"class":119}," lvm2\n",[109,540,541,543,546,549],{"class":111,"line":126},[109,542,116],{"class":115},[109,544,545],{"class":119}," CEPH_RELEASE=",[109,547,548],{"class":199},"18.2.1",[109,550,551],{"class":262}," # replace this with the active release\n",[109,553,554,556,558,560,563,566,569,572],{"class":111,"line":136},[109,555,116],{"class":115},[109,557,436],{"class":119},[109,559,439],{"class":199},[109,561,562],{"class":199}," 
--remote-name",[109,564,565],{"class":199}," --location",[109,567,568],{"class":119}," https:\u002F\u002Fdownload.ceph.com\u002Frpm-",[109,570,571],{"class":232},"${CEPH_RELEASE}",[109,573,574],{"class":119},"\u002Fel9\u002Fnoarch\u002Fcephadm\n",[109,576,577,579,582,585],{"class":111,"line":146},[109,578,116],{"class":115},[109,580,581],{"class":119}," chmod",[109,583,584],{"class":119}," +x",[109,586,587],{"class":119}," cephadm\n",[109,589,590,592,594,597,600,603],{"class":111,"line":239},[109,591,116],{"class":115},[109,593,382],{"class":119},[109,595,596],{"class":119}," .\u002Fcephadm",[109,598,599],{"class":119}," add-repo",[109,601,602],{"class":199}," --release",[109,604,605],{"class":119}," reef\n",[109,607,608],{"class":111,"line":252},[109,609,610],{"class":262},"#### You can check if ceph is the desired version with\n",[109,612,613],{"class":111,"line":259},[109,614,615],{"class":262},"### apt search cephadm\n",[109,617,618,620,622,624],{"class":111,"line":266},[109,619,116],{"class":115},[109,621,382],{"class":119},[109,623,596],{"class":119},[109,625,626],{"class":119}," install\n",[109,628,629,631,634,637],{"class":111,"line":289},[109,630,116],{"class":115},[109,632,633],{"class":119}," ip",[109,635,636],{"class":119}," a",[109,638,639],{"class":262},"                   # Watch out for your IP address\n",[109,641,642,644,646,649,652,655,657,660,663],{"class":111,"line":295},[109,643,116],{"class":115},[109,645,382],{"class":119},[109,647,648],{"class":119}," cephadm",[109,650,651],{"class":119}," bootstrap",[109,653,654],{"class":199}," --mon-ip",[109,656,226],{"class":225},[109,658,659],{"class":119},"THIS_SYSTEMS_NETWORK_I",[109,661,662],{"class":232},"P",[109,664,236],{"class":225},[10,666,667],{},"After the bootstrap, open a ceph shell and add a new administrative user (do not use admin, since it already exists; you can disable it later).",[100,669,671],{"className":102,"code":670,"language":104,"meta":41,"style":41},"$ cephadm shell\n$ vi passwdfile.txt   # 
Enter your new password there\n$ ceph dashboard ac-user-create \u003CUSERNAME> -i passwdfile.txt administrator\n",[106,672,673,682,694],{"__ignoreMap":41},[109,674,675,677,679],{"class":111,"line":112},[109,676,116],{"class":115},[109,678,648],{"class":119},[109,680,681],{"class":119}," shell\n",[109,683,684,686,688,691],{"class":111,"line":126},[109,685,116],{"class":115},[109,687,413],{"class":119},[109,689,690],{"class":119}," passwdfile.txt",[109,692,693],{"class":262},"   # Enter your new password there\n",[109,695,696,698,701,704,707,709,712,714,717,719,722],{"class":111,"line":136},[109,697,116],{"class":115},[109,699,700],{"class":119}," ceph",[109,702,703],{"class":119}," dashboard",[109,705,706],{"class":119}," ac-user-create",[109,708,226],{"class":225},[109,710,711],{"class":119},"USERNAM",[109,713,233],{"class":232},[109,715,716],{"class":225},">",[109,718,274],{"class":199},[109,720,721],{"class":119}," passwdfile.txt",[109,723,724],{"class":119}," administrator\n",[10,726,727],{},"On every host you want to include in your cluster, you need to install the following packages (I did all this with Ansible; maybe I'll write a post about it in the future -> leave me a comment if you are interested):",[100,729,731],{"className":102,"code":730,"language":104,"meta":41,"style":41},"$ apt install podman catatonit lvm2 gdisk \n",[106,732,733],{"__ignoreMap":41},[109,734,735,737,739,741,743,745,748],{"class":111,"line":112},[109,736,116],{"class":115},[109,738,184],{"class":119},[109,740,209],{"class":119},[109,742,532],{"class":119},[109,744,535],{"class":119},[109,746,747],{"class":119}," lvm2",[109,749,750],{"class":119}," gdisk\n",[78,752,754],{"id":753},"installing-ceph-using-cephadm","Installing Ceph using cephadm",[10,756,757],{},"Cephadm helps a lot with bootstrapping a Ceph cluster by preparing the hosts (including the remote hosts) from a single point in your network\u002Fcluster. 
What I learned painfully: if you have a mixed architecture (arm64\u002Faarch64 and amd64\u002Fx86_64), you should not start deploying your cluster from an amd64 machine. To read why this is the case, just drop down the accordion:",[759,760,762],"h3",{"id":761},"why-a-ceph-roll-out-fails-when-starting-on-amd64-klick-if-you-want-to-read-more","Why a ceph roll-out fails when starting on amd64 (click if you want to read more)",[10,764,765,766,771,772,777,778,783],{},"When starting from amd64, everything installs perfectly on your amd64 hosts, and some of the services (docker\u002Fpodman containers) also get installed on your arm64 hosts, but a few services will just fail. A first look into their ",[20,767,770],{"href":768,"rel":769},"https:\u002F\u002Fquay.io\u002Frepository\u002Fceph\u002Fceph?tab=tags&tag=v17",[24],"quay repo"," revealed that it has images for arm64 and amd64. But if you dig deeper, you can see that the ",[20,773,776],{"href":774,"rel":775},"https:\u002F\u002Fgithub.com\u002Fceph\u002Fceph-container\u002Factions\u002Fworkflows\u002Fcontainer_arm64.yml",[24],"arm64 build actions"," failed on GitHub. Applied to our case, this could mean that arm64 machines build their own images when the roll-out is started on arm64, but receive a wrong image hash when cephadm is started on amd64. If you go back in the history of the ceph container builds, you can see that the build was still working 2 years ago (the last ceph version on Docker Hub is 16.2.5), just before they switched hosting their repos from ",[20,779,782],{"href":780,"rel":781},"https:\u002F\u002Fhub.docker.com\u002Fr\u002Fceph\u002Fceph",[24],"docker hub"," to quay.io. I believe this is because Red Hat took over Ceph, and quay.io is a Red Hat product. 
I read somewhere that quay cannot host images for different architectures in parallel, but I stopped digging here...",[10,785,786],{},"It seems that if you just start the deployment from an arm64 machine, it works like a charm :-).",[10,788,789],{},"In preparation for your deployment, you are well advised to distribute your SSH public key to all hosts you want to include in your cluster. This is quite easy... We will use the root user and an ssh key without a password, since it makes things way easier...",[100,791,793],{"className":102,"code":792,"language":104,"meta":41,"style":41},"$ sudo -i\n$ ssh-keygen -t ed25519   # No password, just press enter two times\n### If you want a password protected key, have a look at keychain that makes\n### managing ssh keys with an ssh-agent easy\n\n$ ssh-copy-id root@\u003CCLUSTER_HOST1>\n$ ssh-copy-id root@\u003CCLUSTER_HOST2>\n...\n",[106,794,795,804,820,825,830,834,853,868],{"__ignoreMap":41},[109,796,797,799,801],{"class":111,"line":112},[109,798,116],{"class":115},[109,800,382],{"class":119},[109,802,803],{"class":199}," -i\n",[109,805,806,808,811,814,817],{"class":111,"line":126},[109,807,116],{"class":115},[109,809,810],{"class":119}," ssh-keygen",[109,812,813],{"class":199}," -t",[109,815,816],{"class":119}," ed25519",[109,818,819],{"class":262},"   # No password, just press enter two times\n",[109,821,822],{"class":111,"line":136},[109,823,824],{"class":262},"### If you want a password protected key, have a look at keychain that makes\n",[109,826,827],{"class":111,"line":146},[109,828,829],{"class":262},"### managing ssh keys with an ssh-agent easy\n",[109,831,832],{"class":111,"line":239},[109,833,256],{"emptyLinePlaceholder":255},[109,835,836,838,841,844,847,850],{"class":111,"line":252},[109,837,116],{"class":115},[109,839,840],{"class":119}," ssh-copy-id",[109,842,843],{"class":119}," 
root@",[109,845,846],{"class":225},"\u003C",[109,848,849],{"class":119},"CLUSTER_HOST",[109,851,852],{"class":225},"1>\n",[109,854,855,857,859,861,863,865],{"class":111,"line":259},[109,856,116],{"class":115},[109,858,840],{"class":119},[109,860,843],{"class":119},[109,862,846],{"class":225},[109,864,849],{"class":119},[109,866,867],{"class":225},"2>\n",[109,869,870],{"class":111,"line":266},[109,871,872],{"class":199},"...\n",[10,874,875],{},"We also need to deploy Ceph's ssh key to the hosts. To get the key, you need to execute a ceph command:",[100,877,879],{"className":102,"code":878,"language":104,"meta":41,"style":41},"$ cephadm shell -- ceph cephadm get-pub-key\n",[106,880,881],{"__ignoreMap":41},[109,882,883,885,887,890,893,895,897],{"class":111,"line":112},[109,884,116],{"class":115},[109,886,648],{"class":119},[109,888,889],{"class":119}," shell",[109,891,892],{"class":199}," --",[109,894,700],{"class":119},[109,896,648],{"class":119},[109,898,899],{"class":119}," get-pub-key\n",[10,901,902],{},"Take the line starting with ssh-rsa and add it to the \u002Froot\u002F.ssh\u002Fauthorized_keys file on each host.",[10,904,905],{},"Now we can kick-start our cluster. I used a little Raspberry Pi4 (4GB RAM) running armbian jammy. You could also use an ODROID-C4. For just doing the manager stuff, a little RPi3 with 1GB RAM would also be enough. I'll move the heavy tasks (mon, prometheus, grafana,...) to a VM with 8 GB of RAM on my big arm64 server. 
So the Pi only has to do some easy tasks until it is replaced by a Pi4 with 8GB RAM, as soon as those are available again.",[10,907,908],{},"To sow the seed, issue the following command:",[100,910,912],{"className":102,"code":911,"language":104,"meta":41,"style":41},"$ sudo cephadm bootstrap \\\n  --mon-ip $(ip route get 1.1.1.1 | grep -oP 'src \\K\\S+')\n",[106,913,914,926],{"__ignoreMap":41},[109,915,916,918,920,922,924],{"class":111,"line":112},[109,917,116],{"class":115},[109,919,382],{"class":119},[109,921,648],{"class":119},[109,923,651],{"class":119},[109,925,442],{"class":199},[109,927,928,931,934,937,940,943,946,949,952,955,958],{"class":111,"line":126},[109,929,930],{"class":199},"  --mon-ip",[109,932,933],{"class":232}," $(",[109,935,936],{"class":115},"ip",[109,938,939],{"class":119}," route",[109,941,942],{"class":119}," get",[109,944,945],{"class":199}," 1.1.1.1",[109,947,948],{"class":225}," |",[109,950,951],{"class":115}," grep",[109,953,954],{"class":199}," -oP",[109,956,957],{"class":119}," 'src \\K\\S+'",[109,959,960],{"class":232},")\n",[10,962,963],{},"Now you are done with the initial step... You can log in using the hostname and the credentials as shown. If localhost is shown as the URL, simply replace it with the IP or the hostname of your mgr daemon.",[10,965,966],{},[39,967],{"alt":41,"src":968},"\u002Fimages\u002Fblog\u002F2023\u002F04\u002Fimage-24.png",[10,970,971],{},"Then log in to the web UI.",[10,973,974],{},[39,975],{"alt":41,"src":976},"\u002Fimages\u002Fblog\u002F2023\u002F02\u002Fimage-3.png",[10,978,979],{},"Now you are asked to provide a new password and re-login.",[10,981,982],{},[39,983],{"alt":41,"src":984},"\u002Fimages\u002Fblog\u002F2023\u002F02\u002Fimage-5.png",[10,986,987,988,991],{},"Ceph should greet you with the following screen. 
Just ignore the ",[106,989,990],{},"Expand Cluster","...",[10,993,994],{},[39,995],{"alt":41,"src":996},"\u002Fimages\u002Fblog\u002F2023\u002F02\u002Fimage-6.png",[10,998,999,1000,1003,1004,1007],{},"Now go to ",[106,1001,1002],{},"Cluster"," -> ",[106,1005,1006],{},"Hosts",". There should be only a single host, the mgr (with mon) you just bootstrapped.",[10,1009,1010],{},"Now head over to your other hosts with ssh and prepare the HDDs for use as OSD storage. You will scrub their partition tables with the following command.",[100,1012,1014],{"className":102,"code":1013,"language":104,"meta":41,"style":41},"$ sgdisk --zap-all \u002Fdev\u002Fsda \u002Fdev\u002Fsdb \u003C...>\n",[106,1015,1016],{"__ignoreMap":41},[109,1017,1018,1020,1023,1026,1029,1032,1034,1037,1039],{"class":111,"line":112},[109,1019,116],{"class":115},[109,1021,1022],{"class":119}," sgdisk",[109,1024,1025],{"class":199}," --zap-all",[109,1027,1028],{"class":119}," \u002Fdev\u002Fsda",[109,1030,1031],{"class":119}," \u002Fdev\u002Fsdb",[109,1033,226],{"class":225},[109,1035,1036],{"class":119},"..",[109,1038,76],{"class":232},[109,1040,236],{"class":225},[10,1042,1043],{},"This should get them prepared as OSDs.",[10,1045,1046,1047,1050],{},"Now head over to your ceph web UI again and select ",[106,1048,1049],{},"Add..."," in the hosts section.",[10,1052,1053],{},"TODO some Screenshots",[10,1055,1056],{},[39,1057],{"alt":41,"src":1058},"\u002Fimages\u002Fblog\u002F2023\u002F02\u002Fimage-7.png",[10,1060,1061],{},[39,1062],{"alt":41,"src":1063},"\u002Fimages\u002Fblog\u002F2023\u002F02\u002Fimage-8.png",[10,1065,1066],{},"I took the model as a filter, so every HDD with the same model string on every HC4 gets simply added when ceph is bootstrapping the host. Since we do not have SSDs or NVMes, it makes no sense to define WAL or DB devices... 
They are also not available here :-)",[10,1068,1069],{},"We can then add services to this host.",[10,1071,1072],{},"Now we distribute some daemons over the cluster.",[52,1074,1076],{"id":1075},"todo","TODO",[1078,1079,1080,1084,1087,1090,1093,1096,1099],"ul",{},[1081,1082,1083],"li",{},"Using labels to define the hosts to run services on",[1081,1085,1086],{},"Making Grafana and Prometheus work (SSL issues with self-signed certificate)",[1081,1088,1089],{},"Creating and mounting a CephFS",[1081,1091,1092],{},"Using the object store (Swift and S3)",[1081,1094,1095],{},"Some more details and hints on the HW infrastructure",[1081,1097,1098],{},"Adjusting the CRUSH map",[1081,1100,1101],{},"Creating our own certificates (create a new tutorial post)",[52,1103,1105],{"id":1104},"creating-a-cephfs","Creating a CephFS",[78,1107,1109],{"id":1108},"creating-a-cephfs-with-replication","Creating a CephFS with Replication",[10,1111,1112],{},"The easiest and most secure way to create a CephFS is to use replication. Usually Ceph stores data with a redundancy of 3, meaning it will create 2 copies of your data, striped across failure domains (usually hosts). 
In my setup with 3 hosts (each with 2 OSDs), this is the maximum.",[10,1114,1115],{},"The easiest solution is to simply create the CephFS and let it implicitly create your pools and strategies.",[100,1117,1122],{"className":1118,"code":1120,"language":1121},[1119],"language-text","$ cephadm shell\n$ ceph fs volume create \u003CNAME_OF_FS>\n","text",[106,1123,1120],{"__ignoreMap":41},[10,1125,1126],{},"That's all there is to creating it :-)",[10,1128,1129],{},"To mount it, you need to create a keyring:",[100,1131,1134],{"className":1132,"code":1133,"language":1121},[1119],"$ ceph auth get-or-create client.\u003CCLIENT_NAME> \\\n  mon 'allow r' \\\n  mds 'allow r, allow rw path=\u002F' \\\n  osd 'allow rw pool=erbw12-bigdata-fs' \\\n  -o \u002Froot\u002Fceph.client.\u003CCLIENT_NAME>.keyring\n$ ceph fs authorize \u003CNAME_OF_FS> client.\u003CCLIENT_NAME> \u002F rw\n",[106,1135,1133],{"__ignoreMap":41},[10,1137,1138,1139,1142],{},"Now cat the keyring and copy-paste the content into ",[106,1140,1141],{},"\u002Fetc\u002Fceph\u002Fceph.client.\u003CCLIENT_NAME>.keyring"," on the host where you want to mount your CephFS. 
Now go to this other host, install ceph-fuse package and execute the following:",[100,1144,1146],{"className":102,"code":1145,"language":104,"meta":41,"style":41},"$ sudo -i\n$ mkdir \u002Fetc\u002Fceph\n$ cd \u002Fetc\u002Fceph\u002F\n$ ssh-keygen -t ed25519\n$ ssh-copy-id root@\u003CCEPH_MON_HOST>\n$ scp root@\u003CCEPH_MON_HOST>:\u002Fetc\u002Fceph\u002Fceph.conf .\n$ echo \"\u003CYOUR_COPIED_KEYRING>\" > ceph.client.\u003CCLIENT_NAME>.keyring\n\n# another way is to get the key through ssh from the client \n# host if your ceph command is accessible outside of the shell container\n\n$ ssh root@\u003CCEPH_MON_HOST> ceph auth get client.\u003CCLIENT_NAME> \\\n  > ceph.client.\u003CCLIENT_NAME>.keyring\n",[106,1147,1148,1156,1166,1175,1186,1204,1227,1254,1258,1263,1268,1272,1309],{"__ignoreMap":41},[109,1149,1150,1152,1154],{"class":111,"line":112},[109,1151,116],{"class":115},[109,1153,382],{"class":119},[109,1155,803],{"class":199},[109,1157,1158,1160,1163],{"class":111,"line":126},[109,1159,116],{"class":115},[109,1161,1162],{"class":119}," mkdir",[109,1164,1165],{"class":119}," \u002Fetc\u002Fceph\n",[109,1167,1168,1170,1172],{"class":111,"line":136},[109,1169,116],{"class":115},[109,1171,372],{"class":119},[109,1173,1174],{"class":119}," \u002Fetc\u002Fceph\u002F\n",[109,1176,1177,1179,1181,1183],{"class":111,"line":146},[109,1178,116],{"class":115},[109,1180,810],{"class":119},[109,1182,813],{"class":199},[109,1184,1185],{"class":119}," ed25519\n",[109,1187,1188,1190,1192,1194,1196,1199,1202],{"class":111,"line":239},[109,1189,116],{"class":115},[109,1191,840],{"class":119},[109,1193,843],{"class":119},[109,1195,846],{"class":225},[109,1197,1198],{"class":119},"CEPH_MON_HOS",[109,1200,1201],{"class":232},"T",[109,1203,236],{"class":225},[109,1205,1206,1208,1211,1213,1215,1217,1219,1221,1224],{"class":111,"line":252},[109,1207,116],{"class":115},[109,1209,1210],{"class":119}," 
scp",[109,1212,843],{"class":119},[109,1214,846],{"class":225},[109,1216,1198],{"class":119},[109,1218,1201],{"class":232},[109,1220,716],{"class":225},[109,1222,1223],{"class":119},":\u002Fetc\u002Fceph\u002Fceph.conf",[109,1225,1226],{"class":119}," .\n",[109,1228,1229,1231,1234,1237,1239,1242,1244,1247,1249,1251],{"class":111,"line":259},[109,1230,116],{"class":115},[109,1232,1233],{"class":119}," echo",[109,1235,1236],{"class":119}," \"\u003CYOUR_COPIED_KEYRING>\"",[109,1238,246],{"class":225},[109,1240,1241],{"class":119}," ceph.client.",[109,1243,846],{"class":225},[109,1245,1246],{"class":119},"CLIENT_NAM",[109,1248,233],{"class":232},[109,1250,716],{"class":225},[109,1252,1253],{"class":119},".keyring\n",[109,1255,1256],{"class":111,"line":266},[109,1257,256],{"emptyLinePlaceholder":255},[109,1259,1260],{"class":111,"line":289},[109,1261,1262],{"class":262},"# another way is to get the key through ssh from the client \n",[109,1264,1265],{"class":111,"line":295},[109,1266,1267],{"class":262},"# host if your ceph command is accessible outside of the shell container\n",[109,1269,1270],{"class":111,"line":313},[109,1271,256],{"emptyLinePlaceholder":255},[109,1273,1274,1276,1279,1281,1283,1285,1287,1289,1291,1294,1296,1299,1301,1303,1305,1307],{"class":111,"line":318},[109,1275,116],{"class":115},[109,1277,1278],{"class":119}," ssh",[109,1280,843],{"class":119},[109,1282,846],{"class":225},[109,1284,1198],{"class":119},[109,1286,1201],{"class":232},[109,1288,716],{"class":225},[109,1290,700],{"class":119},[109,1292,1293],{"class":119}," auth",[109,1295,942],{"class":119},[109,1297,1298],{"class":119}," client.",[109,1300,846],{"class":225},[109,1302,1246],{"class":119},[109,1304,233],{"class":232},[109,1306,716],{"class":225},[109,1308,442],{"class":199},[109,1310,1311,1314,1316,1318,1320,1322,1324],{"class":111,"line":333},[109,1312,1313],{"class":225},"  
>",[109,1315,1241],{"class":119},[109,1317,846],{"class":225},[109,1319,1246],{"class":119},[109,1321,233],{"class":232},[109,1323,716],{"class":225},[109,1325,1253],{"class":119},[10,1327,1328],{},"Now you can mount your CephFS:",[100,1330,1332],{"className":102,"code":1331,"language":104,"meta":41,"style":41},"$ mkdir \u003CYOUR_MOUNT_POINT>\n$ ceph-fuse -n client.\u003CCLIENT_NAME> none \\\n  -m \u003CCEPH_MON_HOST> \u003CYOUR_MOUNT_POINT>\n",[106,1333,1334,1349,1374],{"__ignoreMap":41},[109,1335,1336,1338,1340,1342,1345,1347],{"class":111,"line":112},[109,1337,116],{"class":115},[109,1339,1162],{"class":119},[109,1341,226],{"class":225},[109,1343,1344],{"class":119},"YOUR_MOUNT_POIN",[109,1346,1201],{"class":232},[109,1348,236],{"class":225},[109,1350,1351,1353,1356,1359,1361,1363,1365,1367,1369,1372],{"class":111,"line":126},[109,1352,116],{"class":115},[109,1354,1355],{"class":119}," ceph-fuse",[109,1357,1358],{"class":199}," -n",[109,1360,1298],{"class":119},[109,1362,846],{"class":225},[109,1364,1246],{"class":119},[109,1366,233],{"class":232},[109,1368,716],{"class":225},[109,1370,1371],{"class":119}," none",[109,1373,442],{"class":199},[109,1375,1376,1379,1381,1383,1385,1387,1389,1391,1393],{"class":111,"line":136},[109,1377,1378],{"class":199},"  -m",[109,1380,226],{"class":225},[109,1382,1198],{"class":119},[109,1384,1201],{"class":232},[109,1386,716],{"class":225},[109,1388,226],{"class":225},[109,1390,1344],{"class":119},[109,1392,1201],{"class":232},[109,1394,236],{"class":225},[10,1396,1397,1398,1401,1402,1405],{},"You can then check if everything worked (",[106,1399,1400],{},"df -h"," or ",[106,1403,1404],{},"mount",") and put your data in.",[78,1407,1409],{"id":1408},"creating-a-cephfs-with-erasure-coding","Creating a CephFS with Erasure Coding",[10,1411,1412],{},"To create a pool with erasure code as the data pool, you need to create the fs a bit more manually. 
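If you prefer the CLI over the dashboard, roughly the following sketch creates the pools; the profile and pool names are placeholders I made up, and k=2\u002Fm=1 is picked so it fits a 3-host cluster:

```text
$ ceph osd erasure-code-profile set \u003CEC_PROFILE> k=2 m=1 crush-failure-domain=host
$ ceph osd pool create \u003CDATA_POOL_NAME> erasure \u003CEC_PROFILE>
$ ceph osd pool set \u003CDATA_POOL_NAME> allow_ec_overwrites true
$ ceph osd pool create \u003CMETA_POOL_NAME> replicated
```

Alternatively, you can click everything together in the dashboard. 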
First, create your pools using the web UI.",[10,1414,1415],{},[39,1416],{"alt":41,"src":1417},"\u002Fimages\u002Fblog\u002F2023\u002F04\u002Fimage-25.png",[10,1419,1420,1421,1424],{},"If you want to use EC for CephFS, checking the ",[106,1422,1423],{},"EC Overwrites"," is mandatory. Otherwise, Ceph will not accept the pool for CephFS. When choosing the EC profile, you need to keep your cluster topology in mind. The following example will not work on a 3-node cluster (mine has only 3 failure domains = 3 hosts).",[10,1426,1427],{},[39,1428],{"alt":41,"src":1429},"\u002Fimages\u002Fblog\u002F2023\u002F04\u002Fimage-26.png",[10,1431,1432],{},"You also need to create a replication pool for the CephFS metadata.",[10,1434,1435],{},[39,1436],{"alt":41,"src":1437},"\u002Fimages\u002Fblog\u002F2023\u002F04\u002Fimage-27.png",[10,1439,1440,1441,1444],{},"In the ",[106,1442,1443],{},"cephadm shell",", now issue the following commands:",[100,1446,1448],{"className":102,"code":1447,"language":104,"meta":41,"style":41},"$ ceph fs new \u003CFS_NAME> \u003CMETA_POOL_NAME> \u003CDATA_POOL_NAME>\n",[106,1449,1450],{"__ignoreMap":41},[109,1451,1452,1454,1456,1459,1462,1464,1467,1469,1471,1473,1476,1478,1480,1482,1485,1487],{"class":111,"line":112},[109,1453,116],{"class":115},[109,1455,700],{"class":119},[109,1457,1458],{"class":119}," fs",[109,1460,1461],{"class":119}," new",[109,1463,226],{"class":225},[109,1465,1466],{"class":119},"FS_NAM",[109,1468,233],{"class":232},[109,1470,716],{"class":225},[109,1472,226],{"class":225},[109,1474,1475],{"class":119},"META_POOL_NAM",[109,1477,233],{"class":232},[109,1479,716],{"class":225},[109,1481,226],{"class":225},[109,1483,1484],{"class":119},"DATA_POOL_NAM",[109,1486,233],{"class":232},[109,1488,236],{"class":225},[10,1490,1491],{},"For authorization, refer to the steps for the replication pool above.",[78,1493,1495],{"id":1494},"mounting-the-cephfs","Mounting the CephFS",[10,1497,1498,1499,1502],{},"In this example, we will mount the root of the ceph file 
system on a client. If you want to restrict the access to the filesystem, you need to replace ",[106,1500,1501],{},"\u002F"," with the path you want to mount on the client.",[10,1504,1505],{},"First, we need to create a client secret. This can easily be done with the following command:",[100,1507,1509],{"className":102,"code":1508,"language":104,"meta":41,"style":41},"$ cephadm shell\n$ ceph fs ls\n...list of your fs-es...\n$ ceph fs authorize \u003CFS_NAME> client.\u003CSOME_NAME> \u002F rw\n\n    key = acbdef....xyz==\n$ ceph auth get client.\u003CSOME_NAME> >> \u002Fetc\u002Fceph\u002Fceph.client.\u003CSOME_NAME>.keyring\n\n### Now copy ceph.conf, ceph.pub and ceph.client.\u003CSOME_NAME>.keyring from\n### \u002Fetc\u002Fceph to your client's \u002Fetc\u002Fceph\u002F folder\n",[106,1510,1511,1519,1530,1546,1582,1586,1597,1633,1637,1642],{"__ignoreMap":41},[109,1512,1513,1515,1517],{"class":111,"line":112},[109,1514,116],{"class":115},[109,1516,648],{"class":119},[109,1518,681],{"class":119},[109,1520,1521,1523,1525,1527],{"class":111,"line":126},[109,1522,116],{"class":115},[109,1524,700],{"class":119},[109,1526,1458],{"class":119},[109,1528,1529],{"class":119}," ls\n",[109,1531,1532,1534,1537,1540,1543],{"class":111,"line":136},[109,1533,1036],{"class":199},[109,1535,1536],{"class":119},".list",[109,1538,1539],{"class":119}," of",[109,1541,1542],{"class":119}," your",[109,1544,1545],{"class":119}," fs-es...\n",[109,1547,1548,1550,1552,1554,1557,1559,1561,1563,1565,1567,1569,1572,1574,1576,1579],{"class":111,"line":146},[109,1549,116],{"class":115},[109,1551,700],{"class":119},[109,1553,1458],{"class":119},[109,1555,1556],{"class":119}," authorize",[109,1558,226],{"class":225},[109,1560,1466],{"class":119},[109,1562,233],{"class":232},[109,1564,716],{"class":225},[109,1566,1298],{"class":119},[109,1568,846],{"class":225},[109,1570,1571],{"class":119},"SOME_NAM",[109,1573,233],{"class":232},[109,1575,716],{"class":225},[109,1577,1578],{"class":119}," 
\u002F",[109,1580,1581],{"class":119}," rw\n",[109,1583,1584],{"class":111,"line":239},[109,1585,256],{"emptyLinePlaceholder":255},[109,1587,1588,1591,1594],{"class":111,"line":252},[109,1589,1590],{"class":115},"    key",[109,1592,1593],{"class":119}," =",[109,1595,1596],{"class":119}," acbdef....xyz==\n",[109,1598,1599,1601,1603,1605,1607,1609,1611,1613,1615,1617,1620,1623,1625,1627,1629,1631],{"class":111,"line":259},[109,1600,116],{"class":115},[109,1602,700],{"class":119},[109,1604,1293],{"class":119},[109,1606,942],{"class":119},[109,1608,1298],{"class":119},[109,1610,846],{"class":225},[109,1612,1571],{"class":119},[109,1614,233],{"class":232},[109,1616,716],{"class":225},[109,1618,1619],{"class":225}," >>",[109,1621,1622],{"class":119}," \u002Fetc\u002Fceph\u002Fceph.client.",[109,1624,846],{"class":225},[109,1626,1571],{"class":119},[109,1628,233],{"class":232},[109,1630,716],{"class":225},[109,1632,1253],{"class":119},[109,1634,1635],{"class":111,"line":266},[109,1636,256],{"emptyLinePlaceholder":255},[109,1638,1639],{"class":111,"line":289},[109,1640,1641],{"class":262},"### Now copy ceph.conf, ceph.pub and ceph.client.\u003CSOME_NAME>.keyring from\n",[109,1643,1644],{"class":111,"line":295},[109,1645,1646],{"class":262},"### \u002Fetc\u002Fceph to your client's \u002Fetc\u002Fceph\u002F folder\n",[10,1648,1649,1650,1655,1656,1660],{},"Since getting the latest cephfs fuse package on recent distributions is not the easiest way, I prepared a docker container to get it up and running quickly ",[20,1651,1654],{"href":1652,"rel":1653},"https:\u002F\u002Fgithub.com\u002Fthe78mole\u002Fdocker-cephfs\u002Ftree\u002Fmain",[24],"here",". If you want to install it natively on your host, then just follow the instructions on the Ceph pages ",[20,1657,1654],{"href":1658,"rel":1659},"https:\u002F\u002Fdocs.ceph.com\u002Fen\u002Flatest\u002Fcephfs\u002Fmount-using-fuse\u002F",[24],". 
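For comparison, a mount with the kernel client would look roughly like this; the mount point, port and secret file are illustrative:

```text
$ sudo mount -t ceph \u003CCEPH_MON_HOST>:6789:\u002F \u002Fmnt\u002Fcephfs -o name=\u003CCLIENT_NAME>,secretfile=\u002Fetc\u002Fceph\u002F\u003CCLIENT_NAME>.secret
```
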
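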
I suggest using ceph-fuse, since it does not involve kernel modules and you can easily select the same ceph version that your cluster runs.",[78,1662,1664],{"id":1663},"editing-the-crush-map","Editing the CRUSH map",[10,1666,1667,1668,1672,1673,1677],{},"If you want to distribute your cluster physically over different locations or rooms and want to also pin your failure domains to it, you need to change the CRUSH map. To do this, you need to add the buckets with certain types (usually region, datacenter, room,... see ",[20,1669,1654],{"href":1670,"rel":1671},"https:\u002F\u002Fdocs.ceph.com\u002Fen\u002Flatest\u002Frados\u002Foperations\u002Fcrush-map\u002F#types-and-buckets",[24]," for a list of predefined bucket types). After adding, you need to \"move\" them to the correct locations. How this is done is described ",[20,1674,1654],{"href":1675,"rel":1676},"https:\u002F\u002Fwww.netways.de\u002Fblog\u002F2017\u002F03\u002F03\u002Fceph-crush-rules-uber-die-cli\u002F",[24],". Finally, move the osds over to the appropriate buckets.",[10,1679,1680],{},"To avoid any weird problems, you should follow this process:",[1078,1682,1683,1686,1691,1696,1699,1702,1705,1710],{},[1081,1684,1685],{},"make sure ceph is HEALTH_OK",[1081,1687,1688],{},[106,1689,1690],{},"ceph osd set noout",[1081,1692,1693],{},[106,1694,1695],{},"ceph osd set norebalance",[1081,1697,1698],{},"edit crush map",[1081,1700,1701],{},"wait for peering to finish, all PGs must be active+clean",[1081,1703,1704],{},"lots of PGs will also be re-mapped",[1081,1706,1707],{},[106,1708,1709],{},"ceph osd unset norebalance",[1081,1711,1712],{},[106,1713,1714],{},"ceph osd unset noout",[10,1716,1717,1718,1722],{},"To edit the crush map, you can use the following commands (see ",[20,1719,1654],{"href":1720,"rel":1721},"https:\u002F\u002Fdocs.ceph.com\u002Fen\u002Flatest\u002Frados\u002Foperations\u002Fcrush-map-edits\u002F",[24],"):",[100,1724,1727],{"className":1725,"code":1726,"language":1121},[1119],"$ cephadm 
shell\n$ ceph osd getcrushmap -o crushmap.bin\n$ crushtool -d crushmap.bin -o crushmap.txt\n### view and possibly edit the map and save it as crushmap_new.txt\n$ crushtool -c crushmap_new.txt -o crushmap_new.bin\n$ ceph osd setcrushmap -i crushmap_new.bin\n",[106,1728,1726],{"__ignoreMap":41},[52,1730,1732],{"id":1731},"additional-hints","Additional Hints",[78,1734,1736],{"id":1735},"setting-the-minimal-ceph-version","Setting the minimal ceph version",[10,1738,1739],{},"For many new features, you need to restrict your cluster to a certain minimum version of ceph. To do so, check and set the minimum required versions with the following commands (in my example I run reef on all nodes):",[100,1741,1744],{"className":1742,"code":1743,"language":1121},[1119],"$ cephadm shell\n$ ceph osd get-require-min-compat-client\nluminous\n$ ceph osd set-require-min-compat-client reef\n",[106,1745,1743],{"__ignoreMap":41},[10,1747,1748,1749,1752],{},"With this setting, you can e.g. use the read balancer optimization features of reef (",[106,1750,1751],{},"ceph balancer mode \u003Cread|upmap-read>",").",[78,1754,1756],{"id":1755},"adjusting-mgr-host-distribution-1-mgrs","Adjusting mgr Host Distribution (1+ mgrs)",[10,1758,1759,1760,1762],{},"I struggled quite heavily to find out how mgr daemons get distributed across the hosts and was never happy with it. Every time I entered (in ",[106,1761,1443],{},")",[100,1764,1767],{"className":1765,"code":1766,"language":1121},[1119],"ceph orch apply mgr \u003Chost_to_add>\n",[106,1768,1766],{"__ignoreMap":41},[10,1770,1771],{},"The manager daemon just hopped over to the host I wanted to apply it to. I could not find a solution for getting two managers back in, and even ChatGPT could not help. 
Out of frustration I tried to add more than one host name at the end of the command and it led to success:",[100,1773,1776],{"className":1774,"code":1775,"language":1121},[1119],"ceph orch apply mgr \u003Chost1>,\u003Chost2>\n",[106,1777,1775],{"__ignoreMap":41},[10,1779,1780],{},"To ensure that the mgr with the dashboard is running on the right host, just execute the command with the single host first and then run it again with all managers you want to run. This ensures that the already running mgr is the current active one.",[10,1782,1783,1784,1787,1788,1791],{},"Unfortunately, I could not find a way to make the dashboard available on the other hosts. If you ask ",[106,1785,1786],{},"ceph mgr services",", it always shows the old address if the active mgr jumped to a different host. ",[106,1789,1790],{},"cephadm"," itself seems to know where the mgr is running.",[10,1793,1794,1795,1798],{},"Also keep in mind that it is suggested to have managers running on hosts that also serve as monitors (",[106,1796,1797],{},"mon","), according to the ceph administrators docs.",[78,1800,1802],{"id":1801},"maintenance-eg-rebooting-a-host","Maintenance (e.g. rebooting a host)",[10,1804,1805],{},"If you want to do some maintenance, e.g. simply rebooting a host with one or more running osds, you should first tell ceph not to compensate for the missing osds. You can follow this procedure:",[100,1807,1810],{"className":1808,"code":1809,"language":1121},[1119],"$ cephadm shell\n$ ceph osd set noout\n$ ceph osd set norebalance \n",[106,1811,1809],{"__ignoreMap":41},[10,1813,1814,1815,1818],{},"Then execute the stuff you want, e.g. 
",[106,1816,1817],{},"sudo reboot"," on the osd host, and when all daemons are back in ceph, return to normal operation mode",[100,1820,1823],{"className":1821,"code":1822,"language":1121},[1119],"$ ceph -s\n#### Check if everything is fine...\n$ ceph osd unset noout\n$ ceph osd unset norebalance\n",[106,1824,1822],{"__ignoreMap":41},[52,1826,1828],{"id":1827},"troubleshooting","Troubleshooting",[78,1830,1832],{"id":1831},"pgs-not-remapping","PGs not remapping",[10,1834,1835,1836,1839,1840,1843],{},"If you changed your crush map (e.g. introducing another root and moving osds over to the new root), it can happen that it cannot shift over the placement groups and ",[106,1837,1838],{},"ceph -s"," shows all PGs as ",[106,1841,1842],{},"active+clean+remapped"," but backfilling never happens.",[10,1845,1846],{},"You can identify the problem by first listing your PGs and then inspecting the PG in detail",[100,1848,1851],{"className":1849,"code":1850,"language":1121},[1119],"$ ceph pg ls\n....\n5.17 (just an example in your output)....\n...\n$ ceph pg map 5.17\nosdmap e230 pg 5.17 (5.17) -> up [2147483647,2147483647,2147483647,2147483647,2147483647,2147483647] acting [0,1,2,3,4,5]\n",[106,1852,1850],{"__ignoreMap":41},[10,1854,1855],{},"The high numbers (2147483647) indicate that the PG is not mapped to any OSD, but it still belongs to the acting ones 0,1,2,3,4 & 5 (the order is not relevant).",[10,1857,1858],{},"To solve this, you need to edit the crush map. Unfortunately, there is only a CLI command to change the device-class, but none to change the crush map root or leaf to select.",[100,1860,1863],{"className":1861,"code":1862,"language":1121},[1119],"$ cephadm shell\n$ ceph osd getcrushmap -o crush.map\n25  # This number is the epoch of your map... 
\n    # You can use it to identify your changes \n$ crushtool -d crush.map -o crush.25.map.txt\n$ cp crush.25.map.txt crush.26.map.txt\n$ vi crush.26.map.txt   # Edit the lines with \"default\", \n                        # the old name of the crush root in it\n",[106,1864,1862],{"__ignoreMap":41},[10,1866,1867],{},"Edit the crush map...",[100,1869,1872],{"className":1870,"code":1871,"language":1121},[1119],"...\n# buckets\nroot default {\n        id -1               # do not change unnecessarily\n        id -2 class hdd     # do not change unnecessarily\n        # weight 0.00000\n        alg straw2\n        hash 0  # rjenkins1\n}\n# ...\n# I have many other leafs here for zone, region, rooms,...\n# ...\nroot root-mole {\n        id -19              # do not change unnecessarily\n        id -20 class hdd    # do not change unnecessarily\n        # weight 21.83212\n        alg straw2\n        hash 0  # rjenkins1\n        item reg-europe weight 21.83212\n}\n\nrule replicated_rule {\n        id 0\n        type replicated\n        # step take default                # \u003C--- This is the 1st old line\n        step take root-mole                # ...changed to this\n        step chooseleaf firstn 0 type host\n        step emit\n}\nrule Pool_EC_k4m2 {\n        id 2\n        type erasure\n        step set_chooseleaf_tries 5\n        step set_choose_tries 100\n        # step take default class hdd      # \u003C--- This is the 2nd old line\n        step take root-mole class hdd      # ...changed to this      \n        step chooseleaf indep 0 type host\n        step emit\n}\n",[106,1873,1871],{"__ignoreMap":41},[10,1875,1876],{},"Then execute the commands to make it active:",[100,1878,1881],{"className":1879,"code":1880,"language":1121},[1119],"$ crushtool -c crush.26.map.txt -o crush.26.map\n$ ceph osd setcrushmap -i crush.26.map\n",[106,1882,1880],{"__ignoreMap":41},[10,1884,1885],{},"If you made no typo or any other mistake, it should soon start to remap the PGs and show 
you something similar to the following:",[100,1887,1890],{"className":1888,"code":1889,"language":1121},[1119],"$ ceph pg map 5.17\nosdmap e242 pg 5.17 (5.17) -> up [0,1,2,3,4,5] acting [0,1,2,3,4,5]\n",[106,1891,1889],{"__ignoreMap":41},[78,1893,1895],{"id":1894},"stray-daemon","Stray Daemon",[10,1897,1898],{},"Sometimes it happens that Ceph complains (warning) about a stray host with a stray daemon. This can be solved by executing the ceph health detail command in a ceph shell and then, when you have found the daemon that is raising this, just stopping and disabling its system service.",[100,1900,1903],{"className":1901,"code":1902,"language":1121},[1119],"$ cephadm shell\n$ ceph health detail\nHEALTH_WARN 1 stray host(s) with 1 daemon(s) not managed by cephadm\n CEPHADM_STRAY_HOST: 1 stray host(s) with 1 daemon(s) not managed by cephadm\n    stray host ceph-erbw12-pi4-000 has 1 stray daemons: ['mds.FS_ECK4M2_BigSpace.rpi4b.diosuz']\n",[106,1904,1902],{"__ignoreMap":41},[10,1906,1907],{},"Then log in to the host with the stray daemon.",[100,1909,1912],{"className":1910,"code":1911,"language":1121},[1119],"$ systemctl status ceph-...@mds.FS_ECK4M2_BigSpace.rpi4b.diosuz.service\n● ceph-...@mds.FS_ECK4M2_BigSpace.rpi4b.diosuz.service - Ceph mds.FS_ECK4M2_BigSpac>\n     Loaded: loaded (\u002Fetc\u002Fsystemd\u002Fsystem\u002Fceph-...@.service; enabled; vendor preset:>\n     Active: active (running) since Fri 2024-02-09 01:54:00 CET; 2 days ago\n...\n",[106,1913,1911],{"__ignoreMap":41},[10,1915,1916],{},"Just stop and disable it:",[100,1918,1921],{"className":1919,"code":1920,"language":1121},[1119],"$ systemctl stop ceph-...@mds.FS_ECK4M2_BigSpace.rpi4b.diosuz.service\n$ systemctl disable ceph-...@mds.FS_ECK4M2_BigSpace.rpi4b.diosuz.service\n",[106,1922,1920],{"__ignoreMap":41},[10,1924,1925],{},"If you then ask for the health in the ceph shell, you will get HEALTH_OK",[100,1927,1930],{"className":1928,"code":1929,"language":1121},[1119],"$ ceph health 
detail\nHEALTH_OK\n",[106,1931,1929],{"__ignoreMap":41},[78,1933,1935],{"id":1934},"graphs-do-not-show-up-in-dashboard","Graphs do not show up in dashboard",[10,1937,1938],{},"If your graphs on the dashboard page are empty, then the server_addr, ssl_server_port and possibly server_port are not set correctly. You can fix this in the ceph shell:",[100,1940,1943],{"className":1941,"code":1942,"language":1121},[1119],"$ cephadm shell\n$ ceph config get mgr mgr\u002Fdashboard\u002Fserver_addr\n::   # This shows when nothing is configured\n# Now set it to the address you access the dashboard\n$ ceph config set mgr mgr\u002Fdashboard\u002Fserver_addr \u003CYOUR_DASHBOARD_IP>\n$ ceph config set mgr mgr\u002Fdashboard\u002Fssl_server_port 8443   # this is the default\n",[106,1944,1942],{"__ignoreMap":41},[78,1946,1948],{"id":1947},"performance-graphs-not-showing-grafana-problem","Performance Graphs not showing (Grafana Problem)",[10,1950,1951],{},"Here I currently have no solution, but I'll update it as soon as I have one...",[78,1953,1955],{"id":1954},"ceph-node-diskspace-warning","Ceph Node Diskspace Warning",[10,1957,1958,1959,1962,1963,1966],{},"This warning can be mostly ignored and it is not documented anywhere in the health check documentation. The warning arises because Armbian is using a RAM-log (",[106,1960,1961],{},"\u002Fvar\u002Flog",") that gets rsynced to HDD (SD card on ",[106,1964,1965],{},"\u002Fvar\u002Flog.hdd",") every day. It is also rotated, compressed and purged on the card daily. 
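If you want to check or enlarge the ramlog, Armbian keeps the setting in a small config file. This is a sketch and assumes the armbian-ramlog service; the exact file location may differ between Armbian releases:

```text
$ grep -E 'ENABLED|SIZE' \u002Fetc\u002Fdefault\u002Farmbian-ramlog
$ sudo nano \u002Fetc\u002Fdefault\u002Farmbian-ramlog   # e.g. set SIZE=256M
$ sudo systemctl restart armbian-ramlog
```
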
This warning will usually be resolved automatically, especially with the 256M ramlog setting (40M was the Armbian default), and should not pop up too often, or only right after setting up the cluster, while a huge amount of logging is created.",[10,1968,1969,1970,76],{},"If the problem persists, you could dive into details using the ",[20,1971,1974],{"href":1972,"rel":1973},"https:\u002F\u002Fdocs.ceph.com\u002Fen\u002Flatest\u002Frados\u002Foperations\u002Fhealth-checks\u002F",[24],"health check operation documentation",[78,1976,1978],{"id":1977},"manager-crashed","Manager crashed",[10,1980,1981],{},"After one day of runtime, the ceph GUI reported a crash of the manager daemon. To inspect this, you need the ceph command, which is included in ceph-common, which we installed previously without needing it at the time. But for administrative purposes, it is quite handy :-)",[10,1983,1984],{},"To inspect the crash, we will first list all crashes (not only new ones):",[100,1986,1988],{"className":102,"code":1987,"language":104,"meta":41,"style":41},"$ ceph crash ls\n## Alternative to show only new crashes\n$ ceph crash ls-new\n",[106,1989,1990,2001,2006],{"__ignoreMap":41},[109,1991,1992,1994,1996,1999],{"class":111,"line":112},[109,1993,116],{"class":115},[109,1995,700],{"class":119},[109,1997,1998],{"class":119}," crash",[109,2000,1529],{"class":119},[109,2002,2003],{"class":111,"line":126},[109,2004,2005],{"class":262},"## Alternative to show only new crashes\n",[109,2007,2008,2010,2012,2014],{"class":111,"line":136},[109,2009,116],{"class":115},[109,2011,700],{"class":119},[109,2013,1998],{"class":119},[109,2015,2016],{"class":119}," ls-new\n",[10,2018,2019],{},"We will now get a detailed crash report.",[10,2021,2022],{},[39,2023],{"alt":41,"src":2024},"\u002Fimages\u002Fblog\u002F2023\u002F02\u002Fimage-13.png",[10,2026,2027],{},"In my case, I'm not sure if this is just a side effect of the HEALTH_WARN state of the cluster, not being able to pull device metrics. 
We will see if it persists :-)",[10,2029,2030],{},"To get rid of the warning, just issue an archive command:",[100,2032,2034],{"className":102,"code":2033,"language":104,"meta":41,"style":41},"$ ceph crash archive \u003CID>\n# Or to archive all listed (not showing up in ls-new)\n$ ceph crash archive-all\n",[106,2035,2036,2057,2062],{"__ignoreMap":41},[109,2037,2038,2040,2042,2044,2047,2049,2052,2055],{"class":111,"line":112},[109,2039,116],{"class":115},[109,2041,700],{"class":119},[109,2043,1998],{"class":119},[109,2045,2046],{"class":119}," archive",[109,2048,226],{"class":225},[109,2050,2051],{"class":119},"I",[109,2053,2054],{"class":232},"D",[109,2056,236],{"class":225},[109,2058,2059],{"class":111,"line":126},[109,2060,2061],{"class":262},"# Or to archive all listed (not showing up in ls-new)\n",[109,2063,2064,2066,2068,2070],{"class":111,"line":136},[109,2065,116],{"class":115},[109,2067,700],{"class":119},[109,2069,1998],{"class":119},[109,2071,2072],{"class":119}," archive-all\n",[10,2074,2075,2076,2079],{},"To delete older crashes (and also remove them from ",[106,2077,2078],{},"ceph crash ls","), issue the following command.",[100,2081,2083],{"className":102,"code":2082,"language":104,"meta":41,"style":41},"$ ceph crash prune \u003COLDER_THAN_DAYS>\n$ ceph crash prune 3                # Will remove crashes older than 3 days\n",[106,2084,2085,2106],{"__ignoreMap":41},[109,2086,2087,2089,2091,2093,2096,2098,2101,2104],{"class":111,"line":112},[109,2088,116],{"class":115},[109,2090,700],{"class":119},[109,2092,1998],{"class":119},[109,2094,2095],{"class":119}," prune",[109,2097,226],{"class":225},[109,2099,2100],{"class":119},"OLDER_THAN_DAY",[109,2102,2103],{"class":232},"S",[109,2105,236],{"class":225},[109,2107,2108,2110,2112,2114,2116,2119],{"class":111,"line":126},[109,2109,116],{"class":115},[109,2111,700],{"class":119},[109,2113,1998],{"class":119},[109,2115,2095],{"class":119},[109,2117,2118],{"class":199}," 3",[109,2120,2121],{"class":262},"        
        # Will remove crashes older than 3 days\n",[78,2123,2125],{"id":2124},"the-oled-does-not-yet-work-on-bullseye-unstable",[2126,2127,2128],"strong",{},"The OLED does not yet work on bullseye unstable",[10,2130,2131],{},"Now update your repository cache and do an upgrade of the system. You should also change your timezone settings for the OLED of the HC4 to show the correct local time.",[100,2133,2135],{"className":102,"code":2134,"language":104,"meta":41,"style":41},"$ apt update\n$ dpkg-reconfigure tzdata\n$ apt upgrade\n\n#### if using the ODROID HC4\n$ apt install odroid-homecloud-display wget curl gpg jq\n$ reboot\n",[106,2136,2137,2145,2155,2164,2168,2173,2195],{"__ignoreMap":41},[109,2138,2139,2141,2143],{"class":111,"line":112},[109,2140,116],{"class":115},[109,2142,184],{"class":119},[109,2144,187],{"class":119},[109,2146,2147,2149,2152],{"class":111,"line":126},[109,2148,116],{"class":115},[109,2150,2151],{"class":119}," dpkg-reconfigure",[109,2153,2154],{"class":119}," tzdata\n",[109,2156,2157,2159,2161],{"class":111,"line":136},[109,2158,116],{"class":115},[109,2160,184],{"class":119},[109,2162,2163],{"class":119}," upgrade\n",[109,2165,2166],{"class":111,"line":146},[109,2167,256],{"emptyLinePlaceholder":255},[109,2169,2170],{"class":111,"line":239},[109,2171,2172],{"class":262},"#### if using the ODROID HC4\n",[109,2174,2175,2177,2179,2181,2184,2187,2189,2192],{"class":111,"line":252},[109,2176,116],{"class":115},[109,2178,184],{"class":119},[109,2180,209],{"class":119},[109,2182,2183],{"class":119}," odroid-homecloud-display",[109,2185,2186],{"class":119}," wget",[109,2188,436],{"class":119},[109,2190,2191],{"class":119}," gpg",[109,2193,2194],{"class":119}," jq\n",[109,2196,2197,2199],{"class":111,"line":259},[109,2198,116],{"class":115},[109,2200,338],{"class":119},[10,2202,2203],{},"I struggled a lot to install ceph on the ARM64 ODROID HC4... 
Here are my misguided tries:",[100,2205,2207],{"className":102,"code":2206,"language":104,"meta":41,"style":41},"$ mkdir -m 0755 -p \u002Fetc\u002Fapt\u002Fkeyrings\n$ curl -fsSL https:\u002F\u002Fdownload.docker.com\u002Flinux\u002Fdebian\u002Fgpg | \\\n   sudo gpg --dearmor -o \u002Fetc\u002Fapt\u002Fkeyrings\u002Fdocker.gpg\n$ echo \\\n    \"deb [arch=$(dpkg --print-architecture) \\\n     signed-by=\u002Fetc\u002Fapt\u002Fkeyrings\u002Fdocker.gpg] \\\n     https:\u002F\u002Fdownload.docker.com\u002Flinux\u002Fdebian \\\n     $(lsb_release -cs) stable\" | \\\n    sudo tee \u002Fetc\u002Fapt\u002Fsources.list.d\u002Fdocker.list > \u002Fdev\u002Fnull\n$ apt update\n$ apt-get install docker-ce docker-ce-cli containerd.io \\\n   docker-buildx-plugin docker-compose-plugin\n$ sudo docker run hello-world\n",[106,2208,2209,2227,2243,2259,2267,2284,2291,2298,2316,2332,2340,2360,2368],{"__ignoreMap":41},[109,2210,2211,2213,2215,2218,2221,2224],{"class":111,"line":112},[109,2212,116],{"class":115},[109,2214,1162],{"class":119},[109,2216,2217],{"class":199}," -m",[109,2219,2220],{"class":199}," 0755",[109,2222,2223],{"class":199}," -p",[109,2225,2226],{"class":119}," \u002Fetc\u002Fapt\u002Fkeyrings\n",[109,2228,2229,2231,2233,2236,2239,2241],{"class":111,"line":126},[109,2230,116],{"class":115},[109,2232,436],{"class":119},[109,2234,2235],{"class":199}," -fsSL",[109,2237,2238],{"class":119}," https:\u002F\u002Fdownload.docker.com\u002Flinux\u002Fdebian\u002Fgpg",[109,2240,948],{"class":225},[109,2242,442],{"class":199},[109,2244,2245,2248,2250,2253,2256],{"class":111,"line":136},[109,2246,2247],{"class":115},"   sudo",[109,2249,2191],{"class":119},[109,2251,2252],{"class":199}," --dearmor",[109,2254,2255],{"class":199}," -o",[109,2257,2258],{"class":119}," 
\u002Fetc\u002Fapt\u002Fkeyrings\u002Fdocker.gpg\n",[109,2260,2261,2263,2265],{"class":111,"line":146},[109,2262,116],{"class":115},[109,2264,1233],{"class":119},[109,2266,442],{"class":199},[109,2268,2269,2272,2275,2278,2281],{"class":111,"line":239},[109,2270,2271],{"class":119},"    \"deb [arch=$(",[109,2273,2274],{"class":115},"dpkg",[109,2276,2277],{"class":199}," --print-architecture",[109,2279,2280],{"class":119},") ",[109,2282,2283],{"class":199},"\\\n",[109,2285,2286,2289],{"class":111,"line":252},[109,2287,2288],{"class":119},"     signed-by=\u002Fetc\u002Fapt\u002Fkeyrings\u002Fdocker.gpg] ",[109,2290,2283],{"class":199},[109,2292,2293,2296],{"class":111,"line":259},[109,2294,2295],{"class":119},"     https:\u002F\u002Fdownload.docker.com\u002Flinux\u002Fdebian ",[109,2297,2283],{"class":199},[109,2299,2300,2303,2306,2309,2312,2314],{"class":111,"line":266},[109,2301,2302],{"class":119},"     $(",[109,2304,2305],{"class":115},"lsb_release",[109,2307,2308],{"class":199}," -cs",[109,2310,2311],{"class":119},") stable\"",[109,2313,948],{"class":225},[109,2315,442],{"class":199},[109,2317,2318,2321,2324,2327,2329],{"class":111,"line":289},[109,2319,2320],{"class":115},"    sudo",[109,2322,2323],{"class":119}," tee",[109,2325,2326],{"class":119}," \u002Fetc\u002Fapt\u002Fsources.list.d\u002Fdocker.list",[109,2328,246],{"class":225},[109,2330,2331],{"class":119}," \u002Fdev\u002Fnull\n",[109,2333,2334,2336,2338],{"class":111,"line":295},[109,2335,116],{"class":115},[109,2337,184],{"class":119},[109,2339,187],{"class":119},[109,2341,2342,2344,2347,2349,2352,2355,2358],{"class":111,"line":313},[109,2343,116],{"class":115},[109,2345,2346],{"class":119}," apt-get",[109,2348,209],{"class":119},[109,2350,2351],{"class":119}," docker-ce",[109,2353,2354],{"class":119}," docker-ce-cli",[109,2356,2357],{"class":119}," containerd.io",[109,2359,442],{"class":199},[109,2361,2362,2365],{"class":111,"line":318},[109,2363,2364],{"class":119},"   
docker-buildx-plugin",[109,2366,2367],{"class":119}," docker-compose-plugin\n",[109,2369,2370,2372,2374,2377,2380],{"class":111,"line":333},[109,2371,116],{"class":115},[109,2373,382],{"class":119},[109,2375,2376],{"class":119}," docker",[109,2378,2379],{"class":119}," run",[109,2381,2382],{"class":119}," hello-world\n",[10,2384,2385],{},"We need to build ceph ourselves, because the repositories do not contain many of the needed packages. Alternatively, you can run the management node on some x86 and only use arm64 for the OSDs.",[100,2387,2389],{"className":102,"code":2388,"language":104,"meta":41,"style":41},"$ git clone https:\u002F\u002Fgithub.com\u002Fceph\u002Fceph.git\n# or\n$ git clone https:\u002F\u002Fgithub.com\u002Fthe78mole\u002Fceph.git\n$ git checkout quincy-release\n$ .\u002Finstall-deps.sh\n$ cd ceph\n# To prepare a release build... Takes some minutes\n$ .\u002Fdo_cmake.sh -DCMAKE_BUILD_TYPE=RelWithDebInfo\n$ cd build\n# Next step will take many hours (maybe some days)\n$ ninja -j1\n",[106,2390,2391,2402,2406,2417,2429,2436,2445,2450,2460,2469,2474],{"__ignoreMap":41},[109,2392,2393,2395,2397,2399],{"class":111,"line":112},[109,2394,116],{"class":115},[109,2396,359],{"class":119},[109,2398,362],{"class":119},[109,2400,2401],{"class":119}," https:\u002F\u002Fgithub.com\u002Fceph\u002Fceph.git\n",[109,2403,2404],{"class":111,"line":126},[109,2405,406],{"class":262},[109,2407,2408,2410,2412,2414],{"class":111,"line":136},[109,2409,116],{"class":115},[109,2411,359],{"class":119},[109,2413,362],{"class":119},[109,2415,2416],{"class":119}," https:\u002F\u002Fgithub.com\u002Fthe78mole\u002Fceph.git\n",[109,2418,2419,2421,2423,2426],{"class":111,"line":146},[109,2420,116],{"class":115},[109,2422,359],{"class":119},[109,2424,2425],{"class":119}," checkout",[109,2427,2428],{"class":119}," quincy-release\n",[109,2430,2431,2433],{"class":111,"line":239},[109,2432,116],{"class":115},[109,2434,2435],{"class":119}," 
.\u002Finstall-deps.sh\n",[109,2437,2438,2440,2442],{"class":111,"line":252},[109,2439,116],{"class":115},[109,2441,372],{"class":119},[109,2443,2444],{"class":119}," ceph\n",[109,2446,2447],{"class":111,"line":259},[109,2448,2449],{"class":262},"# To prepare a release build... Takes some minutes\n",[109,2451,2452,2454,2457],{"class":111,"line":266},[109,2453,116],{"class":115},[109,2455,2456],{"class":119}," .\u002Fdo_cmake.sh",[109,2458,2459],{"class":199}," -DCMAKE_BUILD_TYPE=RelWithDebInfo\n",[109,2461,2462,2464,2466],{"class":111,"line":289},[109,2463,116],{"class":115},[109,2465,372],{"class":119},[109,2467,2468],{"class":119}," build\n",[109,2470,2471],{"class":111,"line":295},[109,2472,2473],{"class":262},"# Next step will take many hours (maybe some days)\n",[109,2475,2476,2478,2481],{"class":111,"line":313},[109,2477,116],{"class":115},[109,2479,2480],{"class":119}," ninja",[109,2482,2483],{"class":199}," -j1\n",[10,2485,2486,2487,2492,2493,26,2498,76],{},"To be able to distribute the packages (we will need more than a single host for Ceph to make any sense), we will set up a Debian package repository. I will make mine public so you can skip the process of compiling your packages. 
I used a ",[20,2488,2491],{"href":2489,"rel":2490},"https:\u002F\u002Flanbugs.de\u002Fhowtos\u002Flinux\u002Feigenes-debian-ubuntu-repository-aufbauen\u002F",[24],"German tutorial"," on creating my own repository, a tutorial on how to ",[20,2494,2497],{"href":2495,"rel":2496},"https:\u002F\u002Fpmateusz.github.io\u002Flinux\u002F2017\u002F06\u002F30\u002Flinux-secure-apt-repository.html",[24],"host a package repository using GitHub Pages",[20,2499,2502],{"href":2500,"rel":2501},"https:\u002F\u002Fassafmo.github.io\u002F2019\u002F05\u002F02\u002Fppa-repo-hosted-on-github.html",[24],"PPA repo hosted on GitHub",[100,2504,2506],{"className":102,"code":2505,"language":104,"meta":41,"style":41},"$ mkdir ~\u002Fbin && cd ~\u002Fbin\n$ curl --silent --remote-name --location \\\n   https:\u002F\u002Fgithub.com\u002Fceph\u002Fceph\u002Fraw\u002Fquincy\u002Fsrc\u002Fcephadm\u002Fcephadm\n#### or # wget https:\u002F\u002Fgithub.com\u002Fceph\u002Fceph\u002Fraw\u002Fquincy\u002Fsrc\u002Fcephadm\u002Fcephadm\n$ chmod +x cephadm\n$ cp cephadm \u002Fusr\u002Fsbin\n\n#### On arm64, the cephadm package is not available, even if we have this \n#### python script already at hand. Therefore, we put it in \u002Fusr\u002Fsbin and\n#### fake the package to be installed with equiv. 
Don't do this on other\n#### non-ARM systems\n#### vvvv Dirty Hack Start vvvv\n$ apt install equivs\n$ mkdir -p ~\u002Fequivs\u002Fbuild && cd ~\u002Fequivs\n$ curl --silent --remote-name --location \\\n   https:\u002F\u002Fraw.githubusercontent.com\u002Fthe78mole\u002Fthe78mole-snippets\u002Fmain\u002Fceph\u002Fcephadm-17.2.5-1.equiv\n$ cd build\n$ equivs-build ..\u002Fcephadm-17.2.5-1.equiv\n$ dpkg -i cephadm_17.2.5-1~bpo11+1_arm64.deb\n#### ^^^^^ Dirty Hack End ^^^^^\n#### If someone feels responsible to fix the real cephadm package for\n#### all arches (it is a python tool !!!), please do it :-)\n\n$ cephadm add-repo --release quincy\n$ apt update\n$ cephadm install\n$ which cephadm   # should give \u002Fusr\u002Fsbin\u002Fcephadm\n\n# Tweak needed for cephadm is to enable root login over SSH\n$ sed -i \\\n  's\u002F#PermitRootLogin.*$\u002FPermitRootLogin yes\u002F' \\\n  \u002Fetc\u002Fssh\u002Fsshd_config\n$ service ssh restart\n",[106,2507,2508,2526,2540,2545,2550,2560,2571,2575,2580,2585,2590,2595,2600,2611,2630,2645,2651,2660,2671,2684,2690,2696,2702,2707,2721,2730,2739,2752,2757,2763,2774,2782,2788],{"__ignoreMap":41},[109,2509,2510,2512,2514,2517,2520,2523],{"class":111,"line":112},[109,2511,116],{"class":115},[109,2513,1162],{"class":119},[109,2515,2516],{"class":119}," ~\u002Fbin",[109,2518,2519],{"class":232}," && ",[109,2521,2522],{"class":199},"cd",[109,2524,2525],{"class":119}," ~\u002Fbin\n",[109,2527,2528,2530,2532,2534,2536,2538],{"class":111,"line":126},[109,2529,116],{"class":115},[109,2531,436],{"class":119},[109,2533,439],{"class":199},[109,2535,562],{"class":199},[109,2537,565],{"class":199},[109,2539,442],{"class":199},[109,2541,2542],{"class":111,"line":136},[109,2543,2544],{"class":119},"   https:\u002F\u002Fgithub.com\u002Fceph\u002Fceph\u002Fraw\u002Fquincy\u002Fsrc\u002Fcephadm\u002Fcephadm\n",[109,2546,2547],{"class":111,"line":146},[109,2548,2549],{"class":262},"#### or # wget 
https:\u002F\u002Fgithub.com\u002Fceph\u002Fceph\u002Fraw\u002Fquincy\u002Fsrc\u002Fcephadm\u002Fcephadm\n",[109,2551,2552,2554,2556,2558],{"class":111,"line":239},[109,2553,116],{"class":115},[109,2555,581],{"class":119},[109,2557,584],{"class":119},[109,2559,587],{"class":119},[109,2561,2562,2564,2566,2568],{"class":111,"line":252},[109,2563,116],{"class":115},[109,2565,490],{"class":119},[109,2567,648],{"class":119},[109,2569,2570],{"class":119}," \u002Fusr\u002Fsbin\n",[109,2572,2573],{"class":111,"line":259},[109,2574,256],{"emptyLinePlaceholder":255},[109,2576,2577],{"class":111,"line":266},[109,2578,2579],{"class":262},"#### On arm64, the cephadm package is not available, even if we have this \n",[109,2581,2582],{"class":111,"line":289},[109,2583,2584],{"class":262},"#### python script already at hand. Therefore, we put it in \u002Fusr\u002Fsbin and\n",[109,2586,2587],{"class":111,"line":295},[109,2588,2589],{"class":262},"#### fake the package to be installed with equiv. Don't do this on other\n",[109,2591,2592],{"class":111,"line":313},[109,2593,2594],{"class":262},"#### non-ARM systems\n",[109,2596,2597],{"class":111,"line":318},[109,2598,2599],{"class":262},"#### vvvv Dirty Hack Start vvvv\n",[109,2601,2602,2604,2606,2608],{"class":111,"line":333},[109,2603,116],{"class":115},[109,2605,184],{"class":119},[109,2607,209],{"class":119},[109,2609,2610],{"class":119}," equivs\n",[109,2612,2614,2616,2618,2620,2623,2625,2627],{"class":111,"line":2613},14,[109,2615,116],{"class":115},[109,2617,1162],{"class":119},[109,2619,2223],{"class":199},[109,2621,2622],{"class":119}," ~\u002Fequivs\u002Fbuild",[109,2624,2519],{"class":232},[109,2626,2522],{"class":199},[109,2628,2629],{"class":119}," 
~\u002Fequivs\n",[109,2631,2633,2635,2637,2639,2641,2643],{"class":111,"line":2632},15,[109,2634,116],{"class":115},[109,2636,436],{"class":119},[109,2638,439],{"class":199},[109,2640,562],{"class":199},[109,2642,565],{"class":199},[109,2644,442],{"class":199},[109,2646,2648],{"class":111,"line":2647},16,[109,2649,2650],{"class":119},"   https:\u002F\u002Fraw.githubusercontent.com\u002Fthe78mole\u002Fthe78mole-snippets\u002Fmain\u002Fceph\u002Fcephadm-17.2.5-1.equiv\n",[109,2652,2654,2656,2658],{"class":111,"line":2653},17,[109,2655,116],{"class":115},[109,2657,372],{"class":119},[109,2659,2468],{"class":119},[109,2661,2663,2665,2668],{"class":111,"line":2662},18,[109,2664,116],{"class":115},[109,2666,2667],{"class":119}," equivs-build",[109,2669,2670],{"class":119}," ..\u002Fcephadm-17.2.5-1.equiv\n",[109,2672,2674,2676,2679,2681],{"class":111,"line":2673},19,[109,2675,116],{"class":115},[109,2677,2678],{"class":119}," dpkg",[109,2680,274],{"class":199},[109,2682,2683],{"class":119}," cephadm_17.2.5-1~bpo11+1_arm64.deb\n",[109,2685,2687],{"class":111,"line":2686},20,[109,2688,2689],{"class":262},"#### ^^^^^ Dirty Hack End ^^^^^\n",[109,2691,2693],{"class":111,"line":2692},21,[109,2694,2695],{"class":262},"#### If someone feels responsible to fix the real cephadm package for\n",[109,2697,2699],{"class":111,"line":2698},22,[109,2700,2701],{"class":262},"#### all arches (it is a python tool !!!), please do it :-)\n",[109,2703,2705],{"class":111,"line":2704},23,[109,2706,256],{"emptyLinePlaceholder":255},[109,2708,2710,2712,2714,2716,2718],{"class":111,"line":2709},24,[109,2711,116],{"class":115},[109,2713,648],{"class":119},[109,2715,599],{"class":119},[109,2717,602],{"class":199},[109,2719,2720],{"class":119}," 
quincy\n",[109,2722,2724,2726,2728],{"class":111,"line":2723},25,[109,2725,116],{"class":115},[109,2727,184],{"class":119},[109,2729,187],{"class":119},[109,2731,2733,2735,2737],{"class":111,"line":2732},26,[109,2734,116],{"class":115},[109,2736,648],{"class":119},[109,2738,626],{"class":119},[109,2740,2742,2744,2747,2749],{"class":111,"line":2741},27,[109,2743,116],{"class":115},[109,2745,2746],{"class":119}," which",[109,2748,648],{"class":119},[109,2750,2751],{"class":262},"   # should give \u002Fusr\u002Fsbin\u002Fcephadm\n",[109,2753,2755],{"class":111,"line":2754},28,[109,2756,256],{"emptyLinePlaceholder":255},[109,2758,2760],{"class":111,"line":2759},29,[109,2761,2762],{"class":262},"# Tweak needed for cephadm is to enable root login over SSH\n",[109,2764,2766,2768,2770,2772],{"class":111,"line":2765},30,[109,2767,116],{"class":115},[109,2769,271],{"class":119},[109,2771,274],{"class":199},[109,2773,442],{"class":199},[109,2775,2777,2780],{"class":111,"line":2776},31,[109,2778,2779],{"class":119},"  's\u002F#PermitRootLogin.*$\u002FPermitRootLogin yes\u002F'",[109,2781,442],{"class":199},[109,2783,2785],{"class":111,"line":2784},32,[109,2786,2787],{"class":119},"  \u002Fetc\u002Fssh\u002Fsshd_config\n",[109,2789,2791,2793,2796,2798],{"class":111,"line":2790},33,[109,2792,116],{"class":115},[109,2794,2795],{"class":119}," service",[109,2797,1278],{"class":119},[109,2799,2800],{"class":119}," restart\n",[2802,2803],"hr",{},[78,2805,2807],{"id":2806},"kommentare-comments","Kommentare \u002F Comments",[10,2809,2810,2811,2816,2817,76],{},"Do you have questions or comments about this article? 
",[20,2812,2815],{"href":2813,"rel":2814},"https:\u002F\u002Fgithub.com\u002Fthe78mole-blog\u002Fthe78mole-blog.github.io\u002Fissues\u002Fnew?title=Kommentar+zu%3A+taming-the-cephodian-octopus-or-quincy&labels=comment",[24],"Create a GitHub Issue"," or start a ",[20,2818,2821],{"href":2819,"rel":2820},"https:\u002F\u002Fgithub.com\u002Fthe78mole-blog\u002Fthe78mole-blog.github.io\u002Fdiscussions",[24],"Discussion",[2823,2824,2825],"style",{},"html pre.shiki code .sScJk, html code.shiki .sScJk{--shiki-default:#6F42C1;--shiki-dark:#B392F0}html pre.shiki code .sZZnC, html code.shiki .sZZnC{--shiki-default:#032F62;--shiki-dark:#9ECBFF}html .default .shiki span {color: var(--shiki-default);background: var(--shiki-default-bg);font-style: var(--shiki-default-font-style);font-weight: var(--shiki-default-font-weight);text-decoration: var(--shiki-default-text-decoration);}html .shiki span {color: var(--shiki-default);background: var(--shiki-default-bg);font-style: var(--shiki-default-font-style);font-weight: var(--shiki-default-font-weight);text-decoration: var(--shiki-default-text-decoration);}html .dark .shiki span {color: var(--shiki-dark);background: var(--shiki-dark-bg);font-style: var(--shiki-dark-font-style);font-weight: var(--shiki-dark-font-weight);text-decoration: var(--shiki-dark-text-decoration);}html.dark .shiki span {color: var(--shiki-dark);background: var(--shiki-dark-bg);font-style: var(--shiki-dark-font-style);font-weight: var(--shiki-dark-font-weight);text-decoration: var(--shiki-dark-text-decoration);}html pre.shiki code .sj4cs, html code.shiki .sj4cs{--shiki-default:#005CC5;--shiki-dark:#79B8FF}html pre.shiki code .szBVR, html code.shiki .szBVR{--shiki-default:#D73A49;--shiki-dark:#F97583}html pre.shiki code .sVt8B, html code.shiki .sVt8B{--shiki-default:#24292E;--shiki-dark:#E1E4E8}html pre.shiki code .sJ8bj, html code.shiki 
.sJ8bj{--shiki-default:#6A737D;--shiki-dark:#6A737D}",{"title":41,"searchDepth":126,"depth":126,"links":2827},[2828,2829,2830,2831,2834,2835,2836,2837,2838,2839,2840,2841,2842,2843,2844,2845,2846,2847,2848],{"id":80,"depth":126,"text":81},{"id":168,"depth":126,"text":169},{"id":510,"depth":126,"text":511},{"id":753,"depth":126,"text":754,"children":2832},[2833],{"id":761,"depth":136,"text":762},{"id":1108,"depth":126,"text":1109},{"id":1408,"depth":126,"text":1409},{"id":1494,"depth":126,"text":1495},{"id":1663,"depth":126,"text":1664},{"id":1735,"depth":126,"text":1736},{"id":1755,"depth":126,"text":1756},{"id":1801,"depth":126,"text":1802},{"id":1831,"depth":126,"text":1832},{"id":1894,"depth":126,"text":1895},{"id":1934,"depth":126,"text":1935},{"id":1947,"depth":126,"text":1948},{"id":1954,"depth":126,"text":1955},{"id":1977,"depth":126,"text":1978},{"id":2124,"depth":126,"text":2128},{"id":2806,"depth":126,"text":2807},[2850,2851,2852,2853,2854],"ARM","Ceph","Linux","Server","Storage","2023-02-12","md","\u002Fimages\u002Fblog\u002F2023\u002F02\u002Fquincy-logo.webp",{"tags":2859},[2860,2861,2862,2863,2864,2865,2866],"ceph","cluster","hc4","odroid","odroid-hc4","quincy","storage","\u002Fblog\u002F2023\u002Ftaming-the-cephodian-octopus-or-quincy",{"title":5,"description":41},"blog\u002F2023\u002Ftaming-the-cephodian-octopus-or-quincy","FARe3qx37I6WpZ0JqEMq5w3JZXFPlQ2sjpJO4Ba1STs",1777286693659]