
Thread: how to separate different parts of speech in WTM?

  1. #1

    Question how to separate different parts of speech in WTM?

    The title may not describe my question well, but I could not think of a better one. I'll describe the problem in detail here.

    In Hebrew classes, the professor usually requires students to memorize the vocabulary that occurs more than 70 times (in Basic Hebrew, for example). Therefore, after taking this course, I am supposed to know all the words that occur more than 70 times.

    My goal is to mark all the words occurring 69 times or fewer in BW, so that when I see one, I know that I have not memorized it yet.

    First I used the "Word List Manager" to make a list of the words occurring fewer than 70 times. Then I used the "GSE" to search for them in the WTM, and changed the color of all the words that were found.

    Everything works perfectly except for the word list in the WTM. For example, take the word פּרץ: in Van Pelt and Pratico's book 'The Vocabulary Guide to Biblical Hebrew,' the verb form occurs 46 times and the noun form occurs 19 times. Therefore, neither the verb nor the noun needs to be memorized.
    However, in the WTM the root פּרץ occurs 84 times, so if I mark the words occurring fewer than 70 times, this word would not be marked. The problem is that when I see it, I will assume I have memorized it, when in fact I have not.

    The problem with the WTM is that it groups together all forms derived from the same root, regardless of whether they are different parts of speech or even homonyms (I guess).

    Can this problem be solved?
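    The mismatch described above can be made concrete with a small Python sketch. The root name and every hit count below are made-up placeholder numbers, not the actual WTM or lexicon data: the point is only that a root can total 70+ hits across all its forms even though each individual part of speech and homonym falls below 70.

```python
from collections import defaultdict

# Toy data for illustration only; these are NOT the real WTM counts.
# Each entry: (root, part_of_speech, homonym, hits).
entries = [
    ("ROOT1", "verb", 1, 46),
    ("ROOT1", "noun", 1, 19),
    ("ROOT1", "noun", 2, 20),
]

THRESHOLD = 70  # the class requires memorizing words occurring 70x or more

# Counting by root alone (what the WTM's grouping effectively shows):
by_root = defaultdict(int)
for root, pos, homonym, hits in entries:
    by_root[root] += hits

# Counting by (root, part of speech, homonym), as the lexicon does:
by_form = {(root, pos, hom): hits for root, pos, hom, hits in entries}

# The root clears the threshold in aggregate, yet every individual
# form falls below it and therefore still needs to be marked.
print(by_root["ROOT1"])                               # 85
print(all(h < THRESHOLD for h in by_form.values()))   # True
```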

  2. #2
    Join Date
    Nov 2004
    Posts
    159

    Default

    If you go deeper into the command-line formatting for the WTM, you will find that your example word can be searched as either a verb or a noun, and that these can be further narrowed by homonym. In the case of your example, searching for the noun form of homonym 1 returns 19 hits (matching Van Pelt and Pratico); searching for the verb form regardless of homonym returns 50 hits (unlike the lexicon).

    All of this is in the Help files.

  3. #3

    Default

    Quote Originally Posted by bobvenem View Post
    If you go deeper into the command-line formatting for the WTM, you will find that your example word can be searched as either a verb or a noun, and that these can be further narrowed by homonym. In the case of your example, searching for the noun form of homonym 1 returns 19 hits (matching Van Pelt and Pratico); searching for the verb form regardless of homonym returns 50 hits (unlike the lexicon).

    All of this is in the Help files.
    Thank you very much, bobvenem. I know that I can do this for a specific word. But in the situation I described above, I am not dealing with a specific word: I want to search for all the words that occur fewer than 70 times in Van Pelt and Pratico's Vocabulary book, or all the words that occur 47 to 69 times, and then change them to another font to differentiate them from the rest. How can I achieve this? (I think this is a better description of my problem.)

    Thank you.

  4. #4
    Join Date
    Nov 2004
    Posts
    159

    Default

    Sorry about that. However, you should be able to do the same search by part of speech using the wildcard "*" instead of a lemma. Then, by exporting the results to the WLM, you can narrow each part-of-speech list to words with 69 hits or fewer (you would make a separate word list for each part of speech, and then combine them once you have trimmed each down to the appropriate limit).

    The only remaining work would be to eliminate any words that are not in Van Pelt. If you have an electronic list of the words in Van Pelt, you could start by creating a word list in the GSE; just insert the Van Pelt word list as an inclusion list before searching for parts of speech. That way your word lists will contain only the Van Pelt words.

    Hope this makes sense. Someone else will surely have a more efficient way to do this.
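    The list logic in the steps above can be sketched in Python. In BibleWorks the steps would be carried out with the GSE and the Word List Manager; here plain sets and dicts merely model the filtering, combining, and inclusion-list stages, and all lemma names and hit counts are hypothetical placeholders.

```python
# Hypothetical per-part-of-speech hit counts (lemma -> hits).
verb_hits = {"lemma_a": 46, "lemma_b": 120, "lemma_c": 12}
noun_hits = {"lemma_a": 19, "lemma_d": 300, "lemma_e": 55}

THRESHOLD = 70

# Step 1: narrow each part-of-speech list to words below the threshold.
rare_verbs = {w for w, h in verb_hits.items() if h < THRESHOLD}
rare_nouns = {w for w, h in noun_hits.items() if h < THRESHOLD}

# Step 2: combine the trimmed lists into one list of words to mark.
to_mark = rare_verbs | rare_nouns

# Step 3: keep only words that appear in the Van Pelt list
# (the "inclusion list" in the description above).
van_pelt = {"lemma_a", "lemma_c", "lemma_d"}
to_mark &= van_pelt

print(sorted(to_mark))  # ['lemma_a', 'lemma_c']
```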

  5. #5

    Default

    Quote Originally Posted by bobvenem View Post
    Sorry about that. However, you should be able to do the same search by part of speech using the wildcard "*" instead of a lemma. Then, by exporting the results to the WLM, you can narrow each part-of-speech list to words with 69 hits or fewer (you would make a separate word list for each part of speech, and then combine them once you have trimmed each down to the appropriate limit).

    The only remaining work would be to eliminate any words that are not in Van Pelt. If you have an electronic list of the words in Van Pelt, you could start by creating a word list in the GSE; just insert the Van Pelt word list as an inclusion list before searching for parts of speech. That way your word lists will contain only the Van Pelt words.

    Hope this makes sense. Someone else will surely have a more efficient way to do this.
    Thank you. The first suggestion seems a bit too complicated for me; I am a beginner with BW.

    As for the second one, could you explain a bit more about it?
    I don't have an electronic list of the exact words in Van Pelt, but I do have the word list from his book Basics of Biblical Hebrew, which covers most of the words in his Vocabulary book that occur more than 70 times. Those words are in their lexical forms (the following picture is an example). Can you tell me more about how to do this with that list?
    Thank you very much!

  6. #6

    Default

    Quote Originally Posted by bobvenem View Post
    Sorry about that. However, you should be able to do the same search by part of speech using the wildcard "*" instead of a lemma. Then, by exporting the results to the WLM, you can narrow each part-of-speech list to words with 69 hits or fewer (you would make a separate word list for each part of speech, and then combine them once you have trimmed each down to the appropriate limit).

    The only remaining work would be to eliminate any words that are not in Van Pelt. If you have an electronic list of the words in Van Pelt, you could start by creating a word list in the GSE; just insert the Van Pelt word list as an inclusion list before searching for parts of speech. That way your word lists will contain only the Van Pelt words.

    Hope this makes sense. Someone else will surely have a more efficient way to do this.
    This is strange; I replied days ago, but my reply is not showing up.

    Thank you very much for your kind reply and patience.
    I am a beginner, and your first suggestion seems a bit too complex for me.
    As for the second one, I don't have an electronic list of the words in Van Pelt, but I have a list of the words in his book Basics of Biblical Hebrew, which suits me better, since those are the words I have memorized. The list is in Microsoft Word format, with vowels.
    Can you explain in more detail how to proceed with this list?
    Thank you very much!
