No. It is because peter-h uses f_findfirst() and f_findnext(). Each call to either of these scans once through the file list, to obtain a single (the first or the next) entry.
"but he already saved/duplicate every search result he needs in flist"
Only up to maxcount. peter-h did not implement a way to obtain more than the first maxcount entries of a directory.
"indirect pointer method suggested by mariush is the way"
No, it will not work reliably, because FAT does not have inode numbers. Consider what happens if the directory is modified between the indirect index generation and its use: the wrong file will be referenced. Horrible; I'd never accept software that does that sort of thing.
What does my solution do, then? It may show stale data (deleted entries, or entries that have since been renamed), but it never confuses them, since each dirscan_get() does a full pass over the directory contents.
"but if i read correctly, its the sorting problem (when he already has flist)."
The way I read the question is:
"I have very limited memory, not enough to store the names, sizes, and attributes of all directory entries in a directory. How can I provide a list of the directory entries to a remote browser client, in sorted order?"
His example code (not a solution, but a reference) uses f_findfirst()/f_findnext() to read as many entries as fit in the given buffer. It neither sorts nor supports more directory entries than can fit in memory.
My suggestion describes code that acts like f_findnext(), but provides the next ents_max entries of the directory, optionally filtered, with the entire directory sorted using an arbitrary function, doing only one pass over the directory per call. Its interface is somewhat similar to the POSIX.1 scandir() function (which is in all ways superior to the opendir()/readdir()/closedir() that idiots continuously suggest to new programmers even on POSIX-compatible OSes), but it needs to be called repeatedly until all entries have been reported. That is, it solves the excess directory scanning problem (f_readdir() is used instead of f_findfirst()/f_findnext(), so only number_of_entries/ents_max passes over the directory are done), the sort problem (by letting the caller specify the sort function in the dirscan_begin() call), the unstated filtering problem (you don't normally want to list all files; you may wish to e.g. omit hidden or non-readable files in such lists), and the memory limitation (it works even when you supply a buffer of just one entry, although that's a bit silly).
Since then, peter-h has pivoted to a different approach, somewhat dropping the entire question: emit the directory listing as a JavaScript-enhanced HTML page, and do any sorting on the client side. This means that the response to an HTTP directory listing request boils down to something like the following:
FRESULT http_dirlist_response(connection_t *client, const char *path, int (*filter)(FILINFO *))
{
    DIR      dh;
    FILINFO  fi;
    FRESULT  rc;

    /* Open the specified path, to make sure it exists. */
    if (!path || !*path)
        rc = f_opendir(&dh, "/");
    else
        rc = f_opendir(&dh, path);
    if (rc != FR_OK)
        return rc;

    /* Send the header part of the page. We assume this ends in the
       middle of a JavaScript assignment, say
           var fileList = [
    */
    rc = http_dirlist_header(client, path);
    if (rc != FR_OK) {
        f_closedir(&dh);
        return rc;
    }

    /* List each directory entry as a JavaScript object. */
    while (1) {
        rc = f_readdir(&dh, &fi);
        if (rc != FR_OK || !fi.fname[0])
            break;
        if (filter && !filter(&fi))
            continue;
        rc = http_dirlist_entry(client, &fi);
        if (rc != FR_OK)
            break;
    }

    /* If successful thus far, emit the footer part of the page, starting with
           ];
       to end the JavaScript directory entry object array. */
    if (rc == FR_OK)
        rc = http_dirlist_footer(client, path);
    if (rc != FR_OK) {
        f_closedir(&dh);
        return rc;
    }
    return f_closedir(&dh);
}
where connection_t * is just a handle to the client; it could be anything, really.
The above refers to three other functions:
- http_dirlist_header(connection_t *client, const char *path) - emitting the fixed initial part of the HTML directory listing to the user
- http_dirlist_entry(connection_t *client, FILINFO *entry) - emitting a single entry in the directory listing to the user
- http_dirlist_footer(connection_t *client, const char *path) - emitting the fixed final part of the HTML directory listing to the user
If specified, the filter(FILINFO *entry) function can be used to suppress listing of specific files, such as hidden files, and files used internally by this implementation (for example, files in the root directory describing the header and the footer of the HTML listing page). The entries themselves are not sorted, as that is done by the client-side JavaScript.
If the footer part contains, say,

<table>
 <thead>
  <tr>
   <th><a class="th" onclick="return sortType();">Type</a></th>
   <th><a class="th" onclick="return sortName();">Filename</a></th>
   <th><a class="th" onclick="return sortTime();">Modified</a></th>
   <th><a class="th" onclick="return sortSize();">Size</a></th>
  </tr>
 </thead>
 <tbody id="listparent"></tbody>
</table>
then a simple JavaScript for loop can construct the dynamic HTML (<tr><td>Type</td><td>Name</td><td>Date Time</td><td>Size</td></tr>) for each directory entry in sorted order, combine the rows into one long string, and (re)display the list using
    document.getElementById("listparent").innerHTML = stringCombiningTheTableRows;
If the header part ends with
    <script type="text/javascript">var listing = [
and the footer part starts with ];, then defines the four sort functions (each returning false), and finally adds
    window.addEventListener("load", sortName);
then when the page loads, the file list is initially shown sorted by sortName().
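A hedged sketch of that client-side JavaScript: the sample listing data, the row format, and the function bodies are illustrative assumptions; only the listparent element id and the sortName()-returns-false convention come from the outline above. The DOM assignment is shown as a comment so the sorting logic itself is self-contained:

```javascript
// 'listing' is the array the server emits between the header and footer
// parts; each row here is assumed to be [type, name, modified, size].
var listing = [
    ["F", "notes.txt", "2024-05-01 12:34", 12345],
    ["D", "logs",      "2024-04-30 08:00", 0],
    ["F", "a.out",     "2024-05-02 09:15", 40960],
];

// Sort a copy of the listing with the given comparator, build one long
// string of table rows, and return it.
function render(compare) {
    var rows = listing.slice().sort(compare).map(function (e) {
        return "<tr><td>" + e[0] + "</td><td>" + e[1] + "</td><td>" +
               e[2] + "</td><td>" + e[3] + "</td></tr>";
    }).join("");
    // In the browser, this line redisplays the list:
    // document.getElementById("listparent").innerHTML = rows;
    return rows;
}

function sortName() {
    render(function (a, b) {
        return a[1] < b[1] ? -1 : a[1] > b[1] ? 1 : 0;
    });
    return false;  // keep the <a onclick> from navigating anywhere
}
// sortType(), sortTime(), and sortSize() would compare e[0], e[2],
// and e[3] the same way, each also returning false.
```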
As you can see from the above outline, this is a reasonable solution, pushing the memory-hungry sorting to the client, with minimal server-side processing. Sure, there are additional functions, but they shouldn't be too large, and one can also drop f_findfirst()/f_findnext() support (by setting FF_USE_FIND=0).
In a properly designed implementation, however, we'd write the response handler as a coroutine, so that the TCP stack, HTTP server, and/or RTOS used can call it to generate roughly a packet's worth of additional data to send to the client, instead of the single-threaded, one-client-at-a-time approach shown above. But that gets into some annoyingly complicated stuff (as in, depending on everything else that is being used) that is better discussed in a separate thread.
In case it is not obvious, my intent with these posts is not just to reply to those who replied to me, but hopefully to describe the solutions and approaches in a way that is understandable even to those who stumble on this thread much later, looking for answers to a similar question. So, if I happen to explain something that you know very well, do not be offended: I'm not assuming you don't understand this, I'm just trying to keep things at a level that anyone interested can follow. Answering just one person with this kind of verbosity (which I cannot help but do) isn't much fun; but writing it in a way that might help others too makes it worthwhile and fun for me.