
Expose Llava as a shared library for downstream projects #3613

Merged: 34 commits, Nov 6, 2023

Conversation

damian0815 (Contributor) commented Oct 13, 2023

  • convert llava example into struct and init/process/free prompts
  • populate llava-test.cpp with tests
  • load image from base64 string embedded in the prompt
  • decouple image loading from model loading so a single llava_context instance can serve multiple prompt requests
  • whatever else is necessary to enable llama-cpp-python bindings...?

FSSRepo (Collaborator) commented Oct 13, 2023

This PR will be very helpful for implementing llava on the server.

damian0815 (Author) commented Oct 13, 2023

Base64 support is working:

../llama.cpp/build/bin/llava -m ./ggml-model-q5_k.gguf --mmproj mmproj-model-f16.gguf --temp 0.1 \
    -p '<img src="data:image/jpeg;base64,iVBORw0KGgoAAAANSUhEUgAAASwAAAEACAMAAAA+zbsK...AAAAAElFTkSuQmCC"> describe the visual content of this image' --verbose-prompt

The base64 bytes are this image: [image: overfitting_lc]

Output:
The image shows a graph with a line that starts at the top left corner and goes down to the bottom right corner. The line is labeled "loss" and has a negative slope. The graph also has a line labeled "validation" that starts at the top right corner and goes down to the bottom left corner. This line has a positive slope. The graph is likely showing the results of a machine learning model's performance, with the "loss" line representing the model's error and the "validation" line representing the model's accuracy.
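
For reference, a minimal sketch of how such a prompt can be produced programmatically. This is not code from the PR (which relocates a base64.hpp helper into common/ for this purpose, per the commit log at the end); the encoder below is hand-rolled only so the example stays self-contained, and the image path is a placeholder.

#include <cstdio>
#include <fstream>
#include <iterator>
#include <string>
#include <vector>

// minimal base64 encoder (standard alphabet, '=' padding)
static std::string base64_encode(const std::vector<unsigned char> & in) {
    static const char tbl[] =
        "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
    std::string out;
    out.reserve(((in.size() + 2) / 3) * 4);
    for (size_t i = 0; i < in.size(); i += 3) {
        unsigned v = in[i] << 16;
        if (i + 1 < in.size()) { v |= in[i + 1] << 8; }
        if (i + 2 < in.size()) { v |= in[i + 2]; }
        out += tbl[(v >> 18) & 63];
        out += tbl[(v >> 12) & 63];
        out += (i + 1 < in.size()) ? tbl[(v >> 6) & 63] : '=';
        out += (i + 2 < in.size()) ? tbl[v & 63] : '=';
    }
    return out;
}

int main(int argc, char ** argv) {
    if (argc < 2) {
        fprintf(stderr, "usage: %s image.jpg\n", argv[0]);
        return 1;
    }
    std::ifstream f(argv[1], std::ios::binary);
    std::vector<unsigned char> bytes((std::istreambuf_iterator<char>(f)),
                                      std::istreambuf_iterator<char>());
    // wrap the payload in the <img> tag that the llava example scans for
    std::string prompt = "<img src=\"data:image/jpeg;base64," + base64_encode(bytes) +
                         "\"> describe the visual content of this image";
    printf("%s\n", prompt.c_str());
    return 0;
}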

Collaborator:

Probably this should go to common.

Author:

done


bool clip_image_load_from_bytes(const unsigned char * bytes, int bytes_length, clip_image_u8 * img) {
Collaborator:

Suggested change:
- bool clip_image_load_from_bytes(const unsigned char * bytes, int bytes_length, clip_image_u8 * img) {
+ bool clip_image_load_from_bytes(const unsigned char * bytes, size_t size, clip_image_u8 * img) {

Author:

done


const char * clip_path = params.mmproj.c_str();
const char * img_path = params.image.c_str();
static void find_image_tag_in_prompt(const std::string& prompt, size_t& begin_out, size_t& end_out) {
Collaborator:

Better to have such functions in llava-utils.h

Author:

👍 done

auto base64_bytes_start = img_base64_str_start + strlen(IMG_BASE64_TAG_BEGIN);
auto base64_bytes_count = img_base64_str_end - base64_bytes_start;
auto base64_str = prompt.substr(base64_bytes_start, base64_bytes_count );
printf("base64_str: '%s'\n", base64_str.c_str());
Collaborator:

Let's not print this.

Author:

was debugging/wip - done.


if (!clip_image_preprocess(ctx_clip, &img, &img_res, /*pad2square =*/ true)) {
fprintf(stderr, "%s: unable to preprocess %s\n", __func__, img_path);
struct llava_context * llava_init(gpt_params * params) {
Collaborator:

Image loading and inference parts should be stripped out of this function.

Author:

done

free(ctx_llava->image_embd);
}

void llava_process_prompt(struct llava_context * ctx_llava, gpt_params * params, const char * prompt) {
Collaborator:

Let's avoid such god-like functions for now; they kill hackability in the early stages of development and bring no benefit over the functions in llava-utils.h. Better to have single-responsibility functions as much as possible, to enhance development speed and flexibility.

Author:

👍 sure

FSSRepo (Collaborator) commented Oct 13, 2023

The server now supports LLaVA: #3589

damian0815 (Author) commented Oct 14, 2023

@monatis thanks for the review, but please do note this PR is still in "draft" state. I addressed all your comments, and many of them actually anticipated what I had planned to do anyway.

So: I'm not sure if the following is correct/desired or in keeping with the way this project is organised, but I went ahead and did it anyway. Essentially, I have refactored the existing llava demo into a llava library plus a llava CLI/MVP demo:

  • The demo executable now has its own cpp file, llava-cli.cpp. Building now emits a llava-cli executable rather than llava; documentation has been updated to reflect this.
  • This leaves llava.h and llava.cpp as a standalone library. I've updated the CMakeLists.txt to roll these, along with clip.h and clip.cpp, into a libllava.a output.

As mentioned in the top post, my primary goal here is to enable LMQL support, which means enabling llava code paths through the llama-cpp-python bindings. Am I headed in the right direction, or is there something more appropriate I could be doing? I note there's a server project as well, but I'm not sure if/how this plays with llama-cpp-python.

damian0815 (Author):

Actually, now that I look over llama-cpp-python, this might all be over-engineered. All that's really needed, I guess, is:

  1. a mechanism to load the multimodal projector model
  2. some way of generating image embeds and injecting them into the llama context

damian0815 (Author) commented Oct 15, 2023

This is in a much better state now.

The public API is basically:

/** load mmproj model */
LLAMA_API struct clip_ctx * clip_model_load(const char * fname, const int verbosity);
/** free mmproj model */
LLAMA_API void clip_free(struct clip_ctx * ctx);

/** sanity check for clip <-> llava embed size match */
LLAMA_API bool llava_validate_embed_size(const llama_context * ctx_llama, const clip_ctx * ctx_clip);

/** build an image embed from image file bytes */
LLAMA_API struct llava_image_embed * llava_image_embed_make_with_bytes(struct clip_ctx * ctx_clip, int n_threads, const unsigned char * image_bytes, int image_bytes_length);
/** build an image embed from a path to an image filename */
LLAMA_API struct llava_image_embed * llava_image_embed_make_with_filename(struct clip_ctx * ctx_clip, int n_threads, const char * image_path);
/** free an embedding made with llava_image_embed_make_* */
LLAMA_API void llava_image_embed_free(struct llava_image_embed * embed);

/** write the image represented by embed into the llama context with batch size n_batch, 
 * starting at context pos n_past. on completion, n_past points to the next position in the context after the image embed. */
LLAMA_API bool llava_eval_image_embed(struct llama_context * ctx_llama, const struct llava_image_embed * embed, int n_batch, int * n_past);
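
From a downstream consumer's point of view, a minimal usage sketch of this API could look as follows. The llama-side setup calls (llama_backend_init, llama_load_model_from_file, llama_new_context_with_model) are assumed from the llama.cpp C API of this period; the paths, thread count, and batch size are placeholders, and error handling is minimal.

#include "llama.h"
#include "llava.h"
#include "clip.h"

int main() {
    llama_backend_init(/*numa =*/ false);

    llama_model * model = llama_load_model_from_file(
        "ggml-model-q5_k.gguf", llama_model_default_params());
    llama_context * ctx_llama = llama_new_context_with_model(
        model, llama_context_default_params());

    // load the multimodal projector
    clip_ctx * ctx_clip = clip_model_load("mmproj-model-f16.gguf", /*verbosity =*/ 1);

    // sanity check: the CLIP and LLaMA embedding sizes must match
    if (!llava_validate_embed_size(ctx_llama, ctx_clip)) {
        return 1;
    }

    // embed an image, then write it into the context starting at pos 0
    llava_image_embed * embed =
        llava_image_embed_make_with_filename(ctx_clip, /*n_threads =*/ 4, "image.jpg");
    int n_past = 0;
    llava_eval_image_embed(ctx_llama, embed, /*n_batch =*/ 512, &n_past);

    // ... tokenize and eval the text prompt from n_past onward,
    // then sample as in any other llama.cpp program ...

    llava_image_embed_free(embed);
    clip_free(ctx_clip);
    llama_free(ctx_llama);
    llama_free_model(model);
    llama_backend_free();
    return 0;
}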

slaren (Collaborator) commented Oct 15, 2023

Some minor suggestions about the API:

  • Move n_threads to the context as llama does
  • Do not ask for n_batch; use the value from llama_context::cparams. You can add a function uint32_t llama_n_batch(llama_context * ctx) to obtain it if needed (sketched below).
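
The second point amounts to a one-line accessor; a minimal sketch, assuming llama_context keeps its creation parameters in a cparams member as described above:

// inside llama.cpp, where the llama_context internals are visible
uint32_t llama_n_batch(struct llama_context * ctx) {
    return ctx->cparams.n_batch;
}

llava_eval_image_embed could then drop its n_batch parameter and query the context instead.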

ggerganov (Owner) left a comment:

The llava API is not a good idea.

The only thing a 3rd party project needs in order to implement LLaVA is:

  • CLIP API
  • LLaMA API

Everything else is helper functions that belong neither in llama.cpp nor in clip.cpp. If we want to reuse LLaVA helpers across llama.cpp's examples (e.g. llava, server, etc.), these should go into common/llava.h/.cpp

monatis (Collaborator) commented Oct 15, 2023

LLAMA_API struct llava_image_embed * llava_image_embed_make_with_filename(struct clip_ctx * ctx_clip, int n_threads, const char * image_path);

Some first-impression suggestions from my side:

  • Better to keep such structs / functions under the clip_ prefix, because:
    1. Having two different prefixes is confusing.
    2. It's not future-proof; I'll implement other SOTA multimodal models very soon, starting with Idefics, and will re-use these functions to encode images.
  • Use the verb encode instead of make; the latter implies a constructor.
  • Use size_t instead of int to type sizes (see the sketch below).
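
Applied to the bytes-based constructor, those suggestions would read roughly like this (an illustration of the naming proposal only, not the API that was merged):

// hypothetical signature: clip_ prefix, "encode" verb, size_t for sizes
LLAMA_API struct clip_image_embed * clip_image_encode_bytes(
        struct clip_ctx * ctx_clip, int n_threads,
        const unsigned char * image_bytes, size_t image_bytes_length);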

I'll make a thorough review tomorrow.

FSSRepo (Collaborator) commented Oct 15, 2023

> The llava API is not a good idea.
>
> The only thing a 3rd party project needs in order to implement LLaVA is:
>
>   • CLIP API
>   • LLaMA API
>
> Everything else is helper functions that belong neither in llama.cpp nor in clip.cpp. If we want to reuse LLaVA helpers across llama.cpp's examples (e.g. llava, server, etc.), these should go into common/llava.h/.cpp

See my implementation of llava in the server example.

CMakeLists.txt (outdated review thread, resolved)
@monatis monatis marked this pull request as ready for review November 6, 2023 00:39
@monatis monatis marked this pull request as draft November 6, 2023 01:31
monatis (Collaborator) left a comment:

Windows+CUDA still fails.

@monatis monatis marked this pull request as ready for review November 6, 2023 02:01
ggerganov (Owner) left a comment:

This is very close to what I had in mind, though some things on the API level can be improved. I've added a few comments regarding that.

Resolved (outdated) review threads on: build-info.h, common/CMakeLists.txt, examples/llava/llava-utils.h (×2), examples/llava/llava.cpp, examples/llava/llava.h (×2), examples/llava/clip.h
target_link_libraries(${TARGET} PRIVATE common ggml ${CMAKE_THREAD_LIBS_INIT})
set(TARGET llava)

add_library(${TARGET} STATIC llava.cpp llava.h clip.cpp clip.h)
Collaborator:

STATIC should be dropped here to allow building llava as a shared library.
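
One way to serve both kinds of consumers (a sketch of the idea; the merged CMakeLists may differ in detail) is to compile the sources once as an object library and emit static and shared artifacts from the same objects, which matches the llava_shared target mentioned later in this thread:

set(TARGET llava)

# compile once as an object library...
add_library(${TARGET} OBJECT llava.cpp llava.h clip.cpp clip.h)
set_target_properties(${TARGET} PROPERTIES POSITION_INDEPENDENT_CODE ON) # needed for the shared variant

# ...then produce both artifacts from the same objects
add_library(llava_static STATIC $<TARGET_OBJECTS:${TARGET}>)
add_library(llava_shared SHARED $<TARGET_OBJECTS:${TARGET}>)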

Collaborator:

Yes, will update it to build a shared library as well.

Collaborator:

Great, from my tests that looks like the only change I need to integrate this with llama-cpp-python.

Collaborator:

Good, now it also builds as a shared lib.

Collaborator:

That works, thank you!

Omitting the STATIC / OBJECT from target_link_libraries also seemed to work with just the llava target, without the need for the llava_shared target, but if having different targets is preferred I don't mind it.

FSSRepo (Collaborator) commented Nov 6, 2023

It seems to me that a good way to check whether this PR is achieving its goals is to implement the LLaVA API in the server example. If it proves to be a better option than how LLaVA currently functions on the server, it's easy to deduce that we're heading in the right direction, since we can eliminate a lot of duplicate code from the server. This is just my opinion, offered to help; I don't intend to come across as a know-it-all.

@monatis monatis changed the title from "refactor Llava into a servable object/API" to "Expose Llava as a shared library for downstream projects" on Nov 6, 2023
@monatis monatis merged commit 381efbf into ggerganov:master Nov 6, 2023
32 checks passed
damian0815 (Author):

Glad this made it in, thanks folks!

monatis added a commit that referenced this pull request Nov 13, 2023
olexiyb pushed a commit to Sanctum-AI/llama.cpp that referenced this pull request Nov 23, 2023
…#3613)

* wip llava python bindings compatibility

* add external llava API

* add base64 in-prompt image support

* wip refactor image loading

* refactor image load out of llava init

* cleanup

* further cleanup; move llava-cli into its own file and rename

* move base64.hpp into common/

* collapse clip and llava libraries

* move llava into its own subdir

* wip

* fix bug where base64 string was not removed from the prompt

* get libllava to output in the right place

* expose llava methods in libllama.dylib

* cleanup memory usage around clip_image_*

* cleanup and refactor *again*

* update headerdoc

* build with cmake, not tested (WIP)

* Editorconfig

* Editorconfig

* Build with make

* Build with make

* Fix cyclical depts on Windows

* attempt to fix build on Windows

* attempt to fix build on Windows

* Upd TODOs

* attempt to fix build on Windows+CUDA

* Revert changes in cmake

* Fix according to review comments

* Support building as a shared library

* address review comments

---------

Co-authored-by: M. Yusuf Sarıgöz <yusufsarigoz@gmail.com>
Co-authored-by: Jared Van Bortel <jared@nomic.ai>
olexiyb pushed a commit to Sanctum-AI/llama.cpp that referenced this pull request Nov 23, 2023